DOCKERFILE_ENV_FILE file started appearing...
With dokku master, I have a DOCKERFILE_ENV_FILE file in my app when I repush my app.
If I do ps:rebuild myapp, I don't have it.
The file is created from plugins/config/docker-args-deploy
To my understanding, this file should only be created if the app is not buildstep-based.
I think there is an issue here, maybe $IMAGE is empty in the case of a git push deploy master?
Hi @michaelshobbs you wrote this code, is this a bug or a feature?
:)
Probably a bug. :smiley: I'll take a look.
A cursory run of `ps:rebuild` didn't yield the behavior you described.
However, if somehow there was no `dokku/$APP` image then we would
attempt to create this file and the `docker run` command to inspect the
image would fail. Not sure what the expected behavior of `is_image_buildstep_based` should be if the image is not found. Perhaps exit with a different status?
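For illustration, one way to answer that question is to have `is_image_buildstep_based` check for the image explicitly and signal "image not found" with its own exit status. This is a minimal sketch, not the actual dokku plugin code; the `/exec` marker test is an assumption about how a buildstep image might be detected:

```shell
#!/usr/bin/env bash

# Sketch: exit 0 = buildstep-based, 1 = not buildstep-based, 2 = image missing.
is_image_buildstep_based() {
  local image="$1"
  # `docker inspect` exits non-zero when the image does not exist locally,
  # so guard here instead of letting the later `docker run` fail opaquely.
  if ! docker inspect "$image" >/dev/null 2>&1; then
    return 2  # distinct status: image not found
  fi
  # assumed marker: buildstep images ship an /exec entrypoint in their rootfs
  docker run --rm --entrypoint /bin/sh "$image" -c 'test -f /exec'
}
```

A caller such as `docker-args-deploy` could then treat exit status 2 as "no image yet" and skip creating the file entirely.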
I can't reproduce it now, weird. I updated dokku yesterday; maybe there was a fix in it, I don't know. I'll close the issue.
docker 1.4 and "Driver aufs failed to remove root filesystem", "device or resource busy"
I don't think I saw this under docker 1.3.x on aufs (boot2docker on OSX), though I have seen it with devicemapper on a CentOS server. I got this error after upgrading the docker server; retrying `fig up` worked.
I use `brew` and my docker client is still 1.3.2. Just saw the warning in the log below, I guess that's not really supported? I'll try upgrading docker client manually.
```
$ docker version
Client version: 1.3.2
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 39fa2fa
OS/Arch (client): darwin/amd64
Server version: 1.4.0
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 4595d4f
```
```
$ fig -f dns-fig.yml -p dns up -d
Recreating dns_hardfile_1...
Recreating dns_dnsmasq_1...
Cannot destroy container 07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727: Driver aufs failed to remove root filesystem 07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727: rename /mnt/sda1/var/lib/docker/aufs/mnt/07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727 /mnt/sda1/var/lib/docker/aufs/mnt/07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727-removing: device or resource busy
```
```
time="2014-12-14T20:21:27Z" level="info" msg="Container 07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727 failed to exit within 10 seconds of SIGTERM - using the force"
time="2014-12-14T20:21:27Z" level="debug" msg="Sending 9 to 07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727"
time="2014-12-14T20:21:28Z" level="info" msg="+job log(die, 07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727, dns_dnsmasq:latest)"
time="2014-12-14T20:21:28Z" level="info" msg="-job log(die, 07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727, dns_dnsmasq:latest) = OK (0)"
time="2014-12-14T20:21:28Z" level="info" msg="+job release_interface(07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727)"
...
time="2014-12-14T20:21:29Z" level="debug" msg="Calling DELETE /containers/{name:.*}"
time="2014-12-14T20:21:29Z" level="info" msg="DELETE /v1.12/containers/07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727?link=False|force=False|v=False"
time="2014-12-14T20:21:29Z" level="info" msg="+job rm(07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727)"
Cannot destroy container 07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727: Driver aufs failed to remove root filesystem 07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727: rename /mnt/sda1/var/lib/docker/aufs/mnt/07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727 /mnt/sda1/var/lib/docker/aufs/mnt/07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727-removing: device or resource busy
time="2014-12-14T20:21:29Z" level="info" msg="-job rm(07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727) = ERR (1)"
time="2014-12-14T20:21:29Z" level="error" msg="Handler for DELETE /containers/{name:.*} returned error: Cannot destroy container 07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727: Driver aufs failed to remove root filesystem 07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727: rename /mnt/sda1/var/lib/docker/aufs/mnt/07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727 /mnt/sda1/var/lib/docker/aufs/mnt/07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727-removing: device or resource busy"
time="2014-12-14T20:21:29Z" level="error" msg="HTTP Error: statusCode=500 Cannot destroy container 07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727: Driver aufs failed to remove root filesystem 07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727: rename /mnt/sda1/var/lib/docker/aufs/mnt/07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727 /mnt/sda1/var/lib/docker/aufs/mnt/07ba11ca55a87ec6556a27007dd56a42da370ed8d78b34aed2f573266e9dd727-removing: device or resource busy"
time="2014-12-14T20:24:15Z" level="debug" msg="Calling POST /containers/{name:.*}/exec"
time="2014-12-14T20:24:15Z" level="info" msg="POST /v1.15/containers/dns_dnsmasq_1/exec"
time="2014-12-14T20:24:15Z" level="debug" msg="Warning: client and server don't have the same version (client: 1.3.2, server: 1.4.0)"
```
Still seeing this frequently on `fig up -d`, even at docker 1.4.1 on both client and server. I may be having some luck avoiding it with `--no-recreate`.
```
FATA[0167] Error response from daemon: Cannot destroy container 74555d1513c4a9f3fc90ec0bb94bcf5b5ef558b6a54c5ff8a209ddd079fbf8f0: Driver aufs failed to remove root filesystem 74555d1513c4a9f3fc90ec0bb94bcf5b5ef558b6a54c5ff8a209ddd079fbf8f0: rename /mnt/sda1/var/lib/docker/aufs/mnt/74555d1513c4a9f3fc90ec0bb94bcf5b5ef558b6a54c5ff8a209ddd079fbf8f0 /mnt/sda1/var/lib/docker/aufs/mnt/74555d1513c4a9f3fc90ec0bb94bcf5b5ef558b6a54c5ff8a209ddd079fbf8f0-removing: device or resource busy
```
```
$ docker info
Containers: 28
Images: 1356
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Dirs: 1424
Execution Driver: native-0.2
Kernel Version: 3.16.7-tinycore64
Operating System: Boot2Docker 1.4.1 (TCL 5.4); master : 86f7ec8 - Tue Dec 16 23:11:29 UTC 2014
CPUs: 8
Total Memory: 3.866 GiB
Name: boot2docker
ID: 665E:OA4X:64MF:PVJF:WKXR:Q7V2:455C:HS5K:SYQW:VZIU:SI4C:HL5M
Debug mode (server): true
Debug mode (client): false
Fds: 110
Goroutines: 83
EventsListeners: 1
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
```
I'm seeing the same issue on 1.4.1.
```
$ docker info
Containers: 1
Images: 11
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Dirs: 17
Execution Driver: native-0.2
Kernel Version: 3.16.7-tinycore64
Operating System: Boot2Docker 1.4.1 (TCL 5.4); master : 86f7ec8 - Tue Dec 16 23:11:29 UTC 2014
CPUs: 8
Total Memory: 1.961 GiB
Name: boot2docker
ID: LBNP:H3GO:XVDK:GTCP:FNOX:OPY4:QJUP:QIKK:HAKK:TABZ:3NP6:4N7S
Debug mode (server): true
Debug mode (client): false
Fds: 17
Goroutines: 18
EventsListeners: 0
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
```
I'm seeing this too on Docker 1.4.1 (both client and server). But the target container does seem to be destroyed; at least it is not present in the `docker ps -a` output.
This may be unrelated, but just in case: I perform periodic health checks using ephemeral containers attached to the target container's network (via the `--net container:xyz` flag). It seems that one of these check containers was started right before the moment the target container died. As a result, the check container failed to start because the target network was already destroyed (and this is totally ok).
Here is the log (I replaced the ids of the target and check containers with TARGET and CHECK for clarity):
``` text
//
// Creating and starting target container
// docker run -it --rm -p 10.0.42.1:4000:3000 test-app
//
time="2015-01-20T19:36:17Z" level="info" msg="POST /v1.16/containers/create"
time="2015-01-20T19:36:17Z" level="info" msg="+job create()"
time="2015-01-20T19:36:17Z" level="info" msg="+job log(create, TARGET, test-app:latest)"
time="2015-01-20T19:36:17Z" level="info" msg="-job log(create, TARGET, test-app:latest) = OK (0)"
time="2015-01-20T19:36:17Z" level="info" msg="-job create() = OK (0)"
time="2015-01-20T19:36:17Z" level="info" msg="POST /v1.16/containers/TARGET/attach?stderr=1|stdin=1|stdout=1|stream=1"
time="2015-01-20T19:36:17Z" level="info" msg="+job container_inspect(TARGET)"
time="2015-01-20T19:36:17Z" level="info" msg="-job container_inspect(TARGET) = OK (0)"
time="2015-01-20T19:36:17Z" level="info" msg="+job attach(TARGET)"
time="2015-01-20T19:36:17Z" level="info" msg="POST /v1.16/containers/TARGET/start"
time="2015-01-20T19:36:17Z" level="info" msg="+job start(TARGET)"
time="2015-01-20T19:36:17Z" level="info" msg="+job allocate_interface(TARGET)"
time="2015-01-20T19:36:17Z" level="info" msg="-job allocate_interface(TARGET) = OK (0)"
time="2015-01-20T19:36:17Z" level="info" msg="+job allocate_port(TARGET)"
time="2015-01-20T19:36:17Z" level="info" msg="-job allocate_port(TARGET) = OK (0)"
time="2015-01-20T19:36:17Z" level="info" msg="+job log(start, TARGET, test-app:latest)"
time="2015-01-20T19:36:17Z" level="info" msg="-job log(start, TARGET, test-app:latest) = OK (0)"
time="2015-01-20T19:36:17Z" level="info" msg="GET /containers/TARGET/json"
time="2015-01-20T19:36:17Z" level="info" msg="+job container_inspect(TARGET)"
time="2015-01-20T19:36:17Z" level="info" msg="-job start(TARGET) = OK (0)"
time="2015-01-20T19:36:17Z" level="info" msg="-job container_inspect(TARGET) = OK (0)"
time="2015-01-20T19:36:17Z" level="info" msg="POST /v1.16/containers/TARGET/resize?h=25|w=180"
time="2015-01-20T19:36:17Z" level="info" msg="+job resize(TARGET, 25, 180)"
time="2015-01-20T19:36:17Z" level="info" msg="-job resize(TARGET, 25, 180) = OK (0)"
// ... (unrelated log skipped)
//
// Creating and starting ephemeral check container
// docker run --name chk-N --net container:TARGET check-img some_check_command
//
time="2015-01-20T19:36:46Z" level="info" msg="POST /v1.16/containers/create?name=chk-N"
time="2015-01-20T19:36:46Z" level="info" msg="+job create(chk-N)"
time="2015-01-20T19:36:46Z" level="info" msg="+job log(create, CHECK, consul:latest)"
time="2015-01-20T19:36:46Z" level="info" msg="-job log(create, CHECK, consul:latest) = OK (0)"
time="2015-01-20T19:36:46Z" level="info" msg="-job create(chk-N) = OK (0)"
time="2015-01-20T19:36:46Z" level="info" msg="POST /v1.16/containers/CHECK/attach?stderr=1|stdout=1|stream=1"
time="2015-01-20T19:36:46Z" level="info" msg="+job container_inspect(CHECK)"
time="2015-01-20T19:36:46Z" level="info" msg="-job container_inspect(CHECK) = OK (0)"
time="2015-01-20T19:36:46Z" level="info" msg="+job attach(CHECK)"
time="2015-01-20T19:36:46Z" level="info" msg="POST /v1.16/containers/CHECK/start"
time="2015-01-20T19:36:46Z" level="info" msg="+job start(CHECK)"
time="2015-01-20T19:36:46Z" level="info" msg="+job log(start, CHECK, consul:latest)"
time="2015-01-20T19:36:46Z" level="info" msg="-job log(start, CHECK, consul:latest) = OK (0)"
//
// Target container died
//
time="2015-01-20T19:36:47Z" level="info" msg="+job log(die, TARGET, test-app:latest)"
time="2015-01-20T19:36:47Z" level="info" msg="-job log(die, TARGET, test-app:latest) = OK (0)"
time="2015-01-20T19:36:47Z" level="info" msg="+job release_interface(TARGET)"
time="2015-01-20T19:36:47Z" level="info" msg="GET /containers/CHECK/json"
time="2015-01-20T19:36:47Z" level="info" msg="+job container_inspect(CHECK)"
time="2015-01-20T19:36:47Z" level="info" msg="-job attach(TARGET) = OK (0)"
time="2015-01-20T19:36:47Z" level="info" msg="POST /v1.16/containers/TARGET/wait"
time="2015-01-20T19:36:47Z" level="info" msg="+job wait(TARGET)"
time="2015-01-20T19:36:47Z" level="info" msg="-job release_interface(TARGET) = OK (0)"
time="2015-01-20T19:36:47Z" level="info" msg="-job wait(TARGET) = OK (0)"
time="2015-01-20T19:36:47Z" level="info" msg="GET /v1.16/containers/TARGET/json"
time="2015-01-20T19:36:47Z" level="info" msg="+job container_inspect(TARGET)"
time="2015-01-20T19:36:47Z" level="info" msg="-job container_inspect(TARGET) = OK (0)"
//
// Removing target container (as a consequence of --rm cli flag)
//
time="2015-01-20T19:36:47Z" level="info" msg="DELETE /v1.16/containers/TARGET?v=1"
time="2015-01-20T19:36:47Z" level="info" msg="+job rm(TARGET)"
//
// Error removing target container: device or resource busy. This is not OK.
//
Cannot destroy container TARGET: Driver aufs failed to remove root filesystem TARGET: rename /mnt/sda1/var/lib/docker/aufs/mnt/TARGET /mnt/sda1/var/lib/docker/aufs/mnt/TARGET-removing: device or resource busy
time="2015-01-20T19:36:47Z" level="info" msg="-job rm(TARGET) = ERR (1)"
time="2015-01-20T19:36:47Z" level="error" msg="Handler for DELETE /containers/{name:.*} returned error: Cannot destroy container TARGET: Driver aufs failed to remove root filesystem TARGET: rename /mnt/sda1/var/lib/docker/aufs/mnt/TARGET /mnt/sda1/var/lib/docker/aufs/mnt/TARGET-removing: device or resource busy"
time="2015-01-20T19:36:47Z" level="error" msg="HTTP Error: statusCode=500 Cannot destroy container TARGET: Driver aufs failed to remove root filesystem TARGET: rename /mnt/sda1/var/lib/docker/aufs/mnt/TARGET /mnt/sda1/var/lib/docker/aufs/mnt/TARGET-removing: device or resource busy"
//
// Error starting check container: TARGET network is already destroyed. This is OK.
//
time="2015-01-20T19:36:47Z" level="info" msg="+job release_interface(CHECK)"
No network information to release for CHECK
time="2015-01-20T19:36:47Z" level="info" msg="-job release_interface(CHECK) = ERR (1)"
time="2015-01-20T19:36:47Z" level="info" msg="-job attach(CHECK) = OK (0)"
time="2015-01-20T19:36:47Z" level="info" msg="+job release_interface(CHECK)"
No network information to release for CHECK
time="2015-01-20T19:36:47Z" level="info" msg="-job release_interface(CHECK) = ERR (1)"
time="2015-01-20T19:36:47Z" level="info" msg="+job log(die, CHECK, consul:latest)"
time="2015-01-20T19:36:47Z" level="info" msg="-job log(die, CHECK, consul:latest) = OK (0)"
Cannot start container CHECK: setup networking failed get network namespace fd: open /proc/10959/ns/net: no such file or directory
time="2015-01-20T19:36:47Z" level="info" msg="-job start(CHECK) = ERR (1)"
time="2015-01-20T19:36:47Z" level="error" msg="Handler for POST /containers/{name:.*}/start returned error: Cannot start container CHECK: setup networking failed get network namespace fd: open /proc/10959/ns/net: no such file or directory"
time="2015-01-20T19:36:47Z" level="error" msg="HTTP Error: statusCode=404 Cannot start container CHECK: setup networking failed get network namespace fd: open /proc/10959/ns/net: no such file or directory"
```
Docker version | info:
``` text
$ docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): darwin/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8
$ docker info
Containers: 5
Images: 105
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Dirs: 117
Execution Driver: native-0.2
Kernel Version: 3.16.7-tinycore64
Operating System: Boot2Docker 1.4.1 (TCL 5.4); master : 86f7ec8 - Tue Dec 16 23:11:29 UTC 2014
CPUs: 4
Total Memory: 1.961 GiB
Name: boot2docker
ID: LP33:7HWL:DFI6:5AL5:RNBL:D5ZJ:AMGD:WO3I:DGTW:SKHZ:3WFW:KNMJ
Debug mode (server): true
Debug mode (client): false
Fds: 36
Goroutines: 55
EventsListeners: 1
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
```
Same here, although not through boot2docker but on Ubuntu 14.04 directly. We have a bunch of health checks through progrium's [docker-consul](https://github.com/progrium/docker-consul), which are just curl commands that run inside a container. These containers are frequently not removed. They are created by a bash script that launches a container from within the consul container via the mounted `docker.sock`. It's using `docker run --rm --net container:$container_id .....` as well.
The event stream shows that most health-checking containers do die and get destroyed correctly.
```
% docker info
Containers: 3596
Images: 222
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Dirs: 7423
Execution Driver: native-0.2
Kernel Version: 3.13.0-44-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 4
Total Memory: 15.67 GiB
Name: docker02
ID: EOVE:DWGJ:AE2W:YLT5:F27M:IEJL:Q33E:FQCJ:ASIN:EHCM:ACH4:WJND
Username: bvadevops
Registry: [https://index.docker.io/v1/]
WARNING: No swap limit support
```
```
% docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8
```
@remmelt, these containers are not removed because Docker ignores the `--rm` flag when there is a problem launching the container. This is weird and probably undocumented, but it is expected Docker behavior, so what you're seeing is in fact a `docker-consul` issue.
See [this comment](https://github.com/progrium/docker-consul/issues/45#issuecomment-70267512) for more info. I made [a fork](https://github.com/ficusio/docker-consul) that resolves this and several other issues of `docker-consul`. Here is [the related commit](https://github.com/ficusio/docker-consul/commit/52263479d964db902a035e419bfb98101ae30903). I'll try to get these fixes merged back into the original repo, but probably later, as I have very little time right now.
The issue being discussed here occurs when I try to remove the _target_ container (i.e. the container that the health checks run against). It is accompanied by the `Driver aufs failed to remove root filesystem` error message. And despite the error message, the container does get removed; at least it disappears from the `docker ps -a` list.
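The workaround amounts to removing the check container explicitly instead of relying on `--rm`. A hedged sketch of that pattern (the function name and `check-img` are illustrative, not `docker-consul`'s actual code):

```shell
#!/usr/bin/env bash

# Run a one-shot check in the target container's network namespace and
# always clean up, since the daemon skips --rm when the container fails
# to start.
run_check() {
  local target="$1" check_name="$2"
  shift 2
  docker run --name "$check_name" --net "container:${target}" check-img "$@"
  local status=$?
  # explicit removal; ignore errors if the container was already gone
  docker rm -f "$check_name" >/dev/null 2>&1 || true
  return $status
}
```

The check's own exit status is preserved, so the health checker still sees whether the probe itself passed or failed.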
Thank you!
I also see this error sometimes when running the Flocker test suite. For example:
```
[ERROR]
Traceback (most recent call last):
  File "/home/exarkun/Environments/flocker/local/lib/python2.7/site-packages/twisted/trial/util.py", line 296, in _runSequentially
    thing = yield d
  File "/home/exarkun/Environments/flocker/local/lib/python2.7/site-packages/twisted/python/threadpool.py", line 196, in _worker
    result = context.call(ctx, function, *args, **kwargs)
  File "/home/exarkun/Environments/flocker/local/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "/home/exarkun/Environments/flocker/local/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
    return func(*args,**kw)
  File "/home/exarkun/Projects/flocker/master/flocker/node/_docker.py", line 507, in _remove
    self._client.remove_container(container_name)
  File "/home/exarkun/Environments/flocker/local/lib/python2.7/site-packages/docker/client.py", line 896, in remove_container
    self._raise_for_status(res)
  File "/home/exarkun/Environments/flocker/local/lib/python2.7/site-packages/docker/client.py", line 94, in _raise_for_status
    raise errors.APIError(e, response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("Cannot destroy container DockerClientTests-flocker-node-functional-test_docker-DockerClientTests-test_list_removed_containers-888568622283900587922261: Driver aufs failed to remove root filesystem 83692922f272e460b71efdecf129064d3279f5072c2c1e70afcb38c73798c2fa: rename /var/lib/docker/aufs/diff/83692922f272e460b71efdecf129064d3279f5072c2c1e70afcb38c73798c2fa /var/lib/docker/aufs/diff/83692922f272e460b71efdecf129064d3279f5072c2c1e70afcb38c73798c2fa-removing: device or resource busy")
flocker.node.functional.test_docker.DockerClientTests.test_list_removed_containers
```
and
```
[ERROR]
Traceback (most recent call last):
  File "/home/exarkun/Environments/flocker/local/lib/python2.7/site-packages/twisted/python/threadpool.py", line 196, in _worker
    result = context.call(ctx, function, *args, **kwargs)
  File "/home/exarkun/Environments/flocker/local/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "/home/exarkun/Environments/flocker/local/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
    return func(*args,**kw)
  File "/home/exarkun/Projects/flocker/master/flocker/node/_docker.py", line 507, in _remove
    self._client.remove_container(container_name)
  File "/home/exarkun/Environments/flocker/local/lib/python2.7/site-packages/docker/client.py", line 896, in remove_container
    self._raise_for_status(res)
  File "/home/exarkun/Environments/flocker/local/lib/python2.7/site-packages/docker/client.py", line 94, in _raise_for_status
    raise errors.APIError(e, response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("Cannot destroy container flocker--912891792647--622134077079: Driver aufs failed to remove root filesystem 005d96d8ce4ca2d74242cc3873772274dff60fb5980b2ce05413d18e9d0d9d70: rename /var/lib/docker/aufs/diff/005d96d8ce4ca2d74242cc3873772274dff60fb5980b2ce05413d18e9d0d9d70 /var/lib/docker/aufs/diff/005d96d8ce4ca2d74242cc3873772274dff60fb5980b2ce05413d18e9d0d9d70-removing: device or resource busy")
flocker.node.functional.test_docker.IDockerClientNamespacedTests.test_add_and_remove
```
These are both tests that use the Docker API to create and then quickly stop and remove a container.
```
$ sudo docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8
$ sudo docker info
Containers: 3
Images: 39
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Dirs: 51
Execution Driver: native-0.2
Kernel Version: 3.13.0-44-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 8
Total Memory: 62.89 GiB
Name: preon
ID: REGQ:VSJY:T7LE:7RHM:GCW7:QHGZ:K2YJ:L72Z:KLP3:54DY:JO27:EM35
WARNING: No swap limit support
```
I'm also observing this, docker 1.4.1 on Linux 3.16.0-0.bpo.4-amd64.
And some numbers: of 1820 created short-lived containers, 240 caused this error.
This is particularly annoying as `docker run` will print this error message on `stderr`, confusing tools that use that output to determine whether the actual job was successful.
Upgraded to 1.5.0, same problem.
Simple repro case. Under zsh, run:

```
while true; do echo "exit" | docker run -i --rm busybox; if (( ${?} != 0 )); then break; fi; done
```

Run 2-3 parallel instances of the script, if desired.
Environment:
```
~% docker info
Containers: 222
Images: 4
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 450
Execution Driver: native-0.2
Kernel Version: 3.13.0-48-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 12
Total Memory: 23.54 GiB
Name: REDACTED
ID: REDACTED
Debug mode (server): true
Debug mode (client): false
Fds: 18
Goroutines: 26
EventsListeners: 0
Init Path: /usr/bin/docker
Docker Root Dir: /var/lib/docker
WARNING: No swap limit support
~% docker version
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.1
Git commit (client): a8a31ef
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.1
Git commit (server): a8a31ef
```
I tried the reproducer but unfortunately wasn't able to trigger this failure. How long do you typically run this before you see a problem?
Here's my docker info:
```
Containers: 20
Images: 231
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 271
Execution Driver: native-0.2
Kernel Version: 3.13.0-44-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 8
Total Memory: 62.89 GiB
Name: ...
ID: ...
WARNING: No swap limit support
```
and docker version
```
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.1
Git commit (client): a8a31ef
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.1
Git commit (server): a8a31ef
```
At home, I can repro this in a few seconds
on an Ubuntu instance under VMWare (on a Windows host). I don't have
the docker info/version handy at this time.
At work, I can repro this in at most 15-20 minutes with 2 scripts
running concurrently (on the configuration described above). I'll try
more concurrent scripts today.
I experience this problem in ubuntu-14.04 on a vmware host
+1
Also with the Docker v1.6.0 RC5 on Ubuntu 12.04 with trusty backport kernel:
``` bash
sysadmin@slave2:~$ sudo docker info && sudo docker version
Containers: 1
Images: 805
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 807
Dirperm1 Supported: false
Execution Driver: native-0.2
Kernel Version: 3.13.0-49-generic
Operating System: Ubuntu precise (12.04.5 LTS)
CPUs: 4
Total Memory: 3.86 GiB
Name: slave2
ID: L2Y4:4XL5:J56E:SWOQ:EV7G:FBTU:4ACZ:4LBB:RAQW:FBNP:MT32:7JH7
Client version: 1.6.0-rc5
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): fc4825d
OS/Arch (client): linux/amd64
Server version: 1.6.0-rc5
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): fc4825d
OS/Arch (server): linux/amd64
```
And with the current Docker v1.5.0 on Ubuntu 12.04 with trusty backport kernel:
``` bash
sysadmin@slave2:~$ sudo docker version
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.4.1
Git commit (client): a8a31ef
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.4.1
Git commit (server): a8a31ef
sysadmin@slave2:~$
sysadmin@slave2:~$ sudo docker info
Containers: 118
Images: 1510
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 1888
Execution Driver: native-0.2
Kernel Version: 3.13.0-48-generic
Operating System: Ubuntu precise (12.04.5 LTS)
CPUs: 4
Total Memory: 3.86 GiB
Name: slave2
ID: L2Y4:4XL5:J56E:SWOQ:EV7G:FBTU:4ACZ:4LBB:RAQW:FBNP:MT32:7JH7
```
I encounter this issue as well with docker 1.4.1. It happens on a busy machine within a processing pipeline:
```
Error response from daemon: Cannot destroy container eccd3089dc18912001f2047431810e14c4b1808b470bac063a39f87bee16d5be: Driver aufs failed to remove root filesystem eccd3089dc18912001f2047431810e14c4b1808b470bac063a39f87bee16d5be: rename /var/lib/docker/aufs/diff/eccd3089dc18912001f2047431810e14c4b1808b470bac063a39f87bee16d5be /var/lib/docker/aufs/diff/eccd3089dc18912001f2047431810e14c4b1808b470bac063a39f87bee16d5be-removing: device or resource busy
```
+1
I am seeing the same as well with docker 1.6.0. I don't really have anything complicated running on the machine, but the containers I am trying to remove seem to be dead, if that matters.
```
sysadmin@docker10:~$ tail -F /var/logs/docker.log
time="2015-04-17T12:25:31Z" level=info msg="DELETE /v1.18/containers/b49bb2f86b12"
time="2015-04-17T12:25:31Z" level=info msg="+job rm(b49bb2f86b12)"
Cannot destroy container b49bb2f86b12: Driver aufs failed to remove root filesystem b49bb2f86b12163de792f9511fc4bffb49cb8b880a0a251e12591642ffb3b8eb: rename /var/lib/docker/aufs/diff/b49bb2f86b12163de792f9511fc4bffb49cb8b880a0a251e12591642ffb3b8eb /var/lib/docker/aufs/diff/b49bb2f86b12163de792f9511fc4bffb49cb8b880a0a251e12591642ffb3b8eb-removing: device or resource busy
time="2015-04-17T12:25:31Z" level=info msg="-job rm(b49bb2f86b12) = ERR (1)"
time="2015-04-17T12:25:31Z" level=error msg="Handler for DELETE /containers/{name:.*} returned error: Cannot destroy container b49bb2f86b12: Driver aufs failed to remove root filesystem b49bb2f86b12163de792f9511fc4bffb49cb8b880a0a251e12591642ffb3b8eb: rename /var/lib/docker/aufs/diff/b49bb2f86b12163de792f9511fc4bffb49cb8b880a0a251e12591642ffb3b8eb /var/lib/docker/aufs/diff/b49bb2f86b12163de792f9511fc4bffb49cb8b880a0a251e12591642ffb3b8eb-removing: device or resource busy"
time="2015-04-17T12:25:31Z" level=error msg="HTTP Error: statusCode=500 Cannot destroy container b49bb2f86b12: Driver aufs failed to remove root filesystem b49bb2f86b12163de792f9511fc4bffb49cb8b880a0a251e12591642ffb3b8eb: rename /var/lib/docker/aufs/diff/b49bb2f86b12163de792f9511fc4bffb49cb8b880a0a251e12591642ffb3b8eb /var/lib/docker/aufs/diff/b49bb2f86b12163de792f9511fc4bffb49cb8b880a0a251e12591642ffb3b8eb-removing: device or resource busy"
Error response from daemon: Cannot destroy container b49bb2f86b12: Driver aufs failed to remove root filesystem b49bb2f86b12163de792f9511fc4bffb49cb8b880a0a251e12591642ffb3b8eb: rename /var/lib/docker/aufs/diff/b49bb2f86b12163de792f9511fc4bffb49cb8b880a0a251e12591642ffb3b8eb /var/lib/docker/aufs/diff/b49bb2f86b12163de792f9511fc4bffb49cb8b880a0a251e12591642ffb3b8eb-removing: device or resource busy
time="2015-04-17T12:25:31Z" level=info msg="DELETE /v1.18/containers/eac104ae9157"
time="2015-04-17T12:25:31Z" level=info msg="+job rm(eac104ae9157)"
Cannot destroy container eac104ae9157: Driver aufs failed to remove root filesystem eac104ae915764a2bd181fe818e0a6431d330c08f9c2369ddaede48b4b689eb8: rename /var/lib/docker/aufs/diff/eac104ae915764a2bd181fe818e0a6431d330c08f9c2369ddaede48b4b689eb8 /var/lib/docker/aufs/diff/eac104ae915764a2bd181fe818e0a6431d330c08f9c2369ddaede48b4b689eb8-removing: device or resource busy
time="2015-04-17T12:25:31Z" level=info msg="-job rm(eac104ae9157) = ERR (1)"
time="2015-04-17T12:25:31Z" level=error msg="Handler for DELETE /containers/{name:.*} returned error: Cannot destroy container eac104ae9157: Driver aufs failed to remove root filesystem eac104ae915764a2bd181fe818e0a6431d330c08f9c2369ddaede48b4b689eb8: rename /var/lib/docker/aufs/diff/eac104ae915764a2bd181fe818e0a6431d330c08f9c2369ddaede48b4b689eb8 /var/lib/docker/aufs/diff/eac104ae915764a2bd181fe818e0a6431d330c08f9c2369ddaede48b4b689eb8-removing: device or resource busy"
time="2015-04-17T12:25:31Z" level=error msg="HTTP Error: statusCode=500 Cannot destroy container eac104ae9157: Driver aufs failed to remove root filesystem eac104ae915764a2bd181fe818e0a6431d330c08f9c2369ddaede48b4b689eb8: rename /var/lib/docker/aufs/diff/eac104ae915764a2bd181fe818e0a6431d330c08f9c2369ddaede48b4b689eb8 /var/lib/docker/aufs/diff/eac104ae915764a2bd181fe818e0a6431d330c08f9c2369ddaede48b4b689eb8-removing: device or resource busy"
Error response from daemon: Cannot destroy container eac104ae9157: Driver aufs failed to remove root filesystem eac104ae915764a2bd181fe818e0a6431d330c08f9c2369ddaede48b4b689eb8: rename /var/lib/docker/aufs/diff/eac104ae915764a2bd181fe818e0a6431d330c08f9c2369ddaede48b4b689eb8 /var/lib/docker/aufs/diff/eac104ae915764a2bd181fe818e0a6431d330c08f9c2369ddaede48b4b689eb8-removing: device or resource busy
```
```
sysadmin@docker10:~$ sudo docker version
Client version: 1.6.0
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 4749651
OS/Arch (client): linux/amd64
Server version: 1.6.0
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 4749651
OS/Arch (server): linux/amd64
```
```
sysadmin@docker10:~$ sudo docker info
Containers: 3
Images: 81
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 87
Dirperm1 Supported: false
Execution Driver: native-0.2
Kernel Version: 3.10.48-3.10.6-amd64-13160847
Operating System: Ubuntu precise (12.04.2 LTS)
CPUs: 8
Total Memory: 15.68 GiB
Name: docker10
ID: CR2N:5JNE:S6XA:NNXS:FEQD:IVTM:7NCL:INCB:3AAS:CCKJ:XWUU:5FF2
```
```
sysadmin@docker10:~$ sudo docker ps -a
CONTAINER ID   IMAGE                                     COMMAND                CREATED             STATUS             PORTS                     NAMES
8aa56e3ba83f   gliderlabs/registrator:latest             "/bin/sh -c '/bin/re   About an hour ago   Up About an hour                             dope_container
b49bb2f86b12   hub.abc.com/application/test-img:latest   "/bin/sh -c '/usr/lo   20 hours ago        Dead               0.0.0.0:31000->8080/tcp
eac104ae9157   gliderlabs/registrator:latest             "/bin/sh -c '/bin/re   20 hours ago        Dead
```
Albeit I'm on an older version of Ubuntu:
```
13:13:10-ubuntu~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 13.10
Release: 13.10
Codename: saucy
```
My guess is that I'd see this in newer versions as well based on the thread here.
I observed this error today as well:
```
13:05:17-ubuntu~$ sudo service docker stop
docker stop/waiting
13:06:26-ubuntu~$ sudo rm -rf /var/lib/docker
rm: cannot remove ‘/var/lib/docker/aufs’: Device or resource busy
13:06:29-ubuntu~$ ps aux | grep docker
ubuntu 4313 0.0 0.0 9692 948 pts/9 S+ 13:06 0:00 grep --color=auto docker
13:06:47-ubuntu~$
```
I hit this on `docker version: 1.6.0`. I was able to remove the container with `docker rm -f`.
I originally reported this in Dec 2014 with docker 1.3.x. FWIW I've been
using docker-compose and docker client|server 1.5 and 1.6 with
boot2docker on osx and docker-machine, and have not seen the `Cannot
destroy container` error.
I hit this on docker version 1.6.0. The container cannot stop, it just hangs, and after restarting the docker service it cannot be removed.
I saw this on docker 1.6.0 build 4749651 on Ubuntu 14.04. I had some containers (not sure whether they were running or killed) on docker 1.5.0. I upgraded to docker 1.6.0 and then immediately afterwards tried to remove the containers.
`docker rm -f` seemed to work
I spoke too soon, just saw this while doing a few simultaneous
Dockerfile builds with docker 1.6.0 client on OSX against a digitalocean
1.6.1 server.
```
Error removing intermediate container a0568433a48a: Driver aufs failed
to remove root filesystem a0568433a4...: rename
/var/lib/docker/aufs/diff/a0568433a48a...
/var/lib/docker/aufs/diff/a0568433a4...-removing: device or resource
busy
```
`rm -f` isn't the solution; it just removes the container entry from `ps -a`. The volume is still there, eating up precious disk space.
@gionn What about `rm -vf`?
@billsmith `docker rm -f -v name` doesn't work either when the server is under "heavy" load. We run all our tests with docker, and when too many tests run in parallel, removing the container fails with this error.
in case any swarm users encounter this..
I hit a variant of this with swarm just now. I restarted a 'swarm join'
container on a CoreOS node that was suffering from health issues, which I suspect triggered a few container restart policies all at once.
`$ docker rm -f $id`
`Error response from daemon: 500 Internal Server Error: Cannot destroy
container $id: Driver overlay failed to remove root filesystem $id:
remove /var/lib/docker/overlay/$id/merged: device or resource busy
FATA[0001] Error: failed to remove one or more containers`
I waited 5 minutes, ran the same rm -f $id, and it said 'no such container', so I don't think this is a major issue. I just restarted another container with
my stuff on it.
Also seeing this under Debian wheezy when running a docker-running
container inside Kubernetes. docker rm -f seems to remove the container. Running Docker 1.6.
+1, http://stackoverflow.com/questions/30550472/docker-container-with-status-dead-after-consul-healthcheck-runs
no solution yet?
In the meantime I've setup a stinky workaround for this bug in the chef recipe we use to deploy containers.
These are the findings:
- it always happens on certain containers, but I wasn't able to track down a specific characteristic (e.g. volumes, ports); even a trivial ubuntu or centos container running a `sleep 9999` will trigger this, though not on a clean system, it just starts to happen after a while
- the dead container can be removed successfully (with `docker rm` without the `-f` flag) after a new version of the same image is run (leading to a new dead container, but the old one can now be removed)
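The chef-recipe workaround mentioned above can be sketched as a small cleanup loop. This is a hedged sketch, not the actual recipe: `dead_cleanup` is a hypothetical helper name, and the `status=dead` filter assumes a Docker version whose `ps` filter supports it.

```shell
# Periodically sweep containers stuck in the "Dead" state and retry
# removing them; per the findings above, they often become removable
# later (e.g. after a new container from the same image has run).
dead_cleanup() {
  for id in $(docker ps -aq --filter status=dead); do
    if docker rm "$id" >/dev/null 2>&1; then
      echo "removed dead container $id"
    else
      echo "still busy: $id" >&2
    fi
  done
}
```

Run it from cron (or the chef-managed deploy step) so containers that only become removable later are eventually cleaned up.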
+1 I hit this all the time, it is not consistent with any specific
dockerfile
```
Error removing intermediate container
9b63e9a1e9bf: Driver aufs failed to remove root filesystem
9b63e9a1e9bf1aaa97d943b4c082eaa152e38d55bf1884bc28d88de1cc891f1a: rename
/docker/aufs/mnt/9b63e9a1e9bf1aaa97d943b4c082eaa152e38d55bf1884bc28d88de1cc891f1a
/docker/aufs/mnt/9b63e9a1e9bf1aaa97d943b4c082eaa152e38d55bf1884bc28d88de1cc891f1a-removing:
device or resource busy
```
info
```
root@zapdos:/home/zapdos# docker -D version && docker -D info
Client version: 1.6.0
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 4749651
OS/Arch (client): linux/amd64
Server version: 1.6.0
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 4749651
OS/Arch (server): linux/amd64
Containers: 650
Images: 8317
Storage Driver: aufs
Root Dir: /docker/aufs
Backing Filesystem: extfs
Dirs: 9626
Dirperm1 Supported: false
Execution Driver: native-0.2
Kernel Version: 3.13.0-29-generic
Operating System: Ubuntu 14.04 LTS
CPUs: 32
Total Memory: 58.81 GiB
Name: zapdos
ID: HHHH:IIII:MRNF:OYNJ:FZNT:X4EA:ZWQR:TF7G:EEEE:HSP6:RYBS:F3ER
Debug mode (server): false
Debug mode (client): true
Fds: 1111
Goroutines: 3753
System Time: Wed Jul 15 18:54:22 UTC 2015
EventsListeners: 1
Init SHA1: 11117501111dbf64c1111c27861111027c
Init Path: /usr/bin/docker
Docker Root Dir: /docker
WARNING: No swap limit support
```
@anandkumarpatel and I are seeing this very frequently with `weave attach` containers
Seeing the same on a number of containers, on various ports, with and without volumes mounted. All are linked to at least one other running container. Docker info:
```
Client version: 1.6.0
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 4749651
OS/Arch (client): linux/amd64
Server version: 1.6.0
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 4749651
OS/Arch (server): linux/amd64
```
I got a similar issue with docker `1.8.0-dev`.
Below is some useful info:
```
$ docker info
Containers: 0
Images: 18
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 18
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-55-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 1
Total Memory: 490 MiB
Name: consul-server
ID: 22WL:EEPB:YJIQ:LA4V:UHUO:PTZ3:CQYO:F7ED:3RMS:QV7O:LURE:J4W5
WARNING: No swap limit support
Experimental: true
$ docker version
Client:
Version: 1.8.0-dev
API version: 1.20
Go version: go1.4.2
Git commit: a2ea8f2
Built: Sat Jul 25 22:03:05 UTC 2015
OS/Arch: linux/amd64
Experimental: true
Server:
Version: 1.8.0-dev
API version: 1.20
Go version: go1.4.2
Git commit: a2ea8f2
Built: Sat Jul 25 22:03:05 UTC 2015
OS/Arch: linux/amd64
Experimental: true
```
And here the description of my issue:
I had a running consul container. It was started with this command:
```
docker run -d \
  -p 8500:8500 \
  -p 8300-8302:8300-8302/tcp \
  -p 8300-8302:8300-8302/udp \
  -h consul progrium/consul -server -bootstrap
```
I tried to remove it with:
```
$ docker rm -f 5515181938cf
```
And:
```
$ docker stop 5515181938cf || docker rm -f 5515181938cf
```
And:
```
$ docker kill 5515181938cf || docker rm -f 5515181938cf
```
But got the same issue:
```
Error response from daemon: Cannot destroy
container 5515181938cf: Driver aufs failed to remove root filesystem
5515181938cf937a979ffd8feefe3cb1e44793840bb59f87be48cf980c9a3076: rename
/var/lib/docker/aufs/diff/5515181938cf937a979ffd8feefe3cb1e44793840bb59f87be48cf980c9a3076
/var/lib/docker/aufs/diff/5515181938cf937a979ffd8feefe3cb1e44793840bb59f87be48cf980c9a3076-removing:
device or resource busy
Error: failed to remove containers: [5515181938cf]
```
Then I restarted the docker service:
```
$ service docker restart
```
After that the container was in the **Exited** state:
```
$ docker ps -a
CONTAINER ID   IMAGE             COMMAND                  CREATED             STATUS                       PORTS                                                                                                                  NAMES
5515181938cf   progrium/consul   "/bin/start -server -"   About an hour ago   Exited (137) 8 seconds ago   53/tcp, 0.0.0.0:8300-8302->8300-8302/tcp, 8400/tcp, 53/udp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8300-8302->8300-8302/udp   elegant_fermat
```
But I still couldn't remove it:
```
$ docker rm 5515181938cf
Error response from daemon: Cannot destroy container 5515181938cf: Driver aufs failed
to remove root filesystem
5515181938cf937a979ffd8feefe3cb1e44793840bb59f87be48cf980c9a3076: rename
/var/lib/docker/aufs/diff/5515181938cf937a979ffd8feefe3cb1e44793840bb59f87be48cf980c9a3076
/var/lib/docker/aufs/diff/5515181938cf937a979ffd8feefe3cb1e44793840bb59f87be48cf980c9a3076-removing: device or resource busy
Error: failed to remove containers: [5515181938cf]
```
Here is the content of this directory:
```
$ ls -al /var/lib/docker/aufs/diff/5515181938cf937a979ffd8feefe3cb1e44793840bb59f87be48cf980c9a3076
total 20
drwxr-xr-x 5 root root 4096 Jul 29 09:14 .
drwxr-xr-x 22 root root 4096 Jul 29 09:14 ..
drwxr-xr-x 2 root root 4096 Jul 29 09:14 data
-r--r--r-- 1 root root 0 Jul 29 09:14 .wh..wh.aufs
drwx------ 2 root root 4096 Jul 29 09:14 .wh..wh.orph
drwx------ 2 root root 4096 Jul 29 09:14 .wh..wh.plnk
```
`lsof` returned nothing:
```
$ lsof +D /var/lib/docker/aufs/diff/5515181938cf937a979ffd8feefe3cb1e44793840bb59f87be48cf980c9a3076
```
I also couldn't remove it manually:
```
$ rm -rf /var/lib/docker/aufs/diff/5515181938cf937a979ffd8feefe3cb1e44793840bb59f87be48cf980c9a3076
rm: cannot remove ‘/var/lib/docker/aufs/diff/5515181938cf937a979ffd8feefe3cb1e44793840bb59f87be48cf980c9a3076’: Device or resource busy
```
Then I found out that the consul process was still running:
```
$ ps aux | grep consul
root 30879 0.1 2.0 33960432 10160 ? Ssl 09:14 0:09 /bin/consul agent -config-dir=/config -server -bootstrap
```
After killing the process manually, I could remove the directory and the container:
```
$ service docker stop
$ kill 30879
$ rm -rf /var/lib/docker/aufs/diff/5515181938cf937a979ffd8feefe3cb1e44793840bb59f87be48cf980c9a3076
$ service docker start
docker start/running, process 31775
$ docker ps -a
CONTAINER ID   IMAGE             COMMAND                  CREATED             STATUS   PORTS                                                                                                                  NAMES
5515181938cf   progrium/consul   "/bin/start -server -"   About an hour ago   Dead     53/tcp, 0.0.0.0:8300-8302->8300-8302/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8300-8302->8300-8302/udp, 53/udp, 8400/tcp   nostalgic_darwin
$ docker rm -f 5515181938cf
5515181938cf
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
Also seeing this with docker 1.6.0 (not the first time I've seen it).
Rancher-Agent crashes devicemapper with docker daemon
**Rancher Version:** 1.1.0
**Docker Version:** 1.10.3
OS and where are the hosts located? (cloud, bare metal, etc): RHEL 7.2 on VMware
Setup Details: (single node rancher vs. HA rancher, internal DB vs. external DB) single node with external DB
**Environment Type:** Cattle
**Steps to Reproduce:**
Docker environment using devicemapper as storage driver with loopback device.
**Docker info:**
Server Version: 1.10.3
Storage Driver: devicemapper
Pool Name: docker-253:0-1180431-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop2
Metadata file: /dev/loop3
Data Space Used: 581 MB
Data Space Total: 107.4 GB
Data Space Available: 19.81 GB
Metadata Space Used: 1.434 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.146 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.107-RHEL7 (2015-10-14)
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 3.10.0-327.4.4.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 31.26 GiB
**Results:**
**Containers went into an unresponsive state.**
**Stopping or deleting a container leaves it in the "dead" state.**
**Restarting the docker daemon fails with the message below.**
Sep 12 03:58:27 sv1lxdcim04 docker[7434]:
time="2016-09-12T03:58:27.172380678-07:00" level=warning msg="devmapper:
Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section."
Sep 12 03:58:27 sv1lxdcim04 docker[7434]:
time="2016-09-12T03:58:27.226281019-07:00" level=error
msg="[graphdriver] prior storage driver \"devicemapper\" failed:
devmapper: Base Device UUID and Filesystem verification
failed.devmapper: Failed to find uuid for device /dev/mapper/docker-253:2-5767170-base:exit status 2"
Sep 12 03:58:27 sv1lxdcim04 docker[7434]: time="2016-09-12T03:58:27.226350604-07:00"
level=fatal msg="Error starting daemon: error initializing graphdriver:
devmapper: Base Device UUID and Filesystem verification
failed.devmapper: Failed to find uuid for device
/dev/mapper/docker-253:2-5767170-base:exit status 2"
Sep 12 03:58:27 sv1lxdcim04 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Sep 12 03:58:27 sv1lxdcim04 systemd[1]: Failed to start Docker Application Container Engine.
The cadvisor process is still showing as a zombie process, even after the docker daemon is down.
The docker daemon comes back only after cleaning the devicemapper directory, which clears all image and container data.
**Expected:**
cadvisor prevents docker from removing monitored containers?
Hi all, I have a problem using cadvisor on CentOS 7. When cadvisor is running, docker fails to remove other containers, saying that the container's filesystem is busy. After cadvisor is stopped, container removal works again.
I demonstrated that in this gist: https://gist.github.com/cornelius-keller/0fd2d23b68ccd88c9328
I also included os version and docker info in the gist.
Thanks for reporting, @cornelius-keller
what cadvisor version are you running? Can you get host:port/validate for cadvisor?
Is this a temporary situation, or does the container fs stays busy till you delete cadvisor?
@rjnagal
Cadvisor version is:
```
[root@583274-app35 ~]# docker images
REPOSITORY                  TAG      IMAGE ID       CREATED        VIRTUAL SIZE
docker.io/google/cadvisor   latest   399ae3c46a0e   47 hours ago   19.89 MB
[root@583274-app35 ~]#
```
This is a permanent situation. The container fs stays busy until I delete cadvisor.
What do you mean by getting host:port/validate for cadvisor? Cadvisor was still running and responsive on the web ui if that is what you mean. Unfortunately I can't give you any public host port to validate as cadvisor is only exposed via a vpn.
Yeah, I just need the output from the /validate endpoint on the cadvisor UI. You can scrub any data that's private in there. Thanks.
Sorry, it was a long day; I did not get that this was an endpoint. I added the output to the gist.
I am facing this same issue. Essentially, running cadvisor with
`--volume=/:/rootfs:ro` causes other containers' devicemapper mounts to
be mounted inside the cadvisor container, so they can't be properly
destroyed when issuing `docker rm` on the
target container as they will appear in use.
How can this be solved?
When i run it on Fedora 21, it works fine. But when i run it on Ubuntu
14.04.2 LTS I get the same error as described above.
Error response from daemon: Cannot destroy container
xxx_jenkinsMaster_1230: Driver aufs failed to remove root filesystem
13b421d0458e740e42e5fa5ac1cb68f32638f0bc723d9ba16718955214d79b7d: rename
/var/lib/docker/aufs/mnt/13b421d0458e740e42e5fa5ac1cb68f32638f0bc723d9ba16718955214d79b7d
/var/lib/docker/aufs/mnt/13b421d0458e740e42e5fa5ac1cb68f32638f0bc723d9ba16718955214d79b7d-removing:
device or resource busy
The main difference is that Ubuntu uses AUFS, whereas Fedora uses devicemapper. Maybe that's the problem.
@rjnagal I can confirm that this issue happens on Ubuntu trusty x64 with Docker 1.8.1, cadvisor:latest and devicemapper.
'1cb6051b30a1' being the container ID.
```
# grep -l 1cb6051b30a1 /proc/*/mountinfo
/proc/1963/mountinfo
# ps aux | grep -i 1963
root 1963 1.9 0.8 588740 71688 ? Ssl Aug26 30:08 /usr/bin/cadvisor
root 14767 0.0 0.0 11744 952 pts/0 S+ 00:56 0:00 grep --color=auto -i 1963
```
Please suggest a workaround for this.
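One way to track down the holder without guessing is to generalize the `grep` shown above into a small helper. This is a sketch under the assumption that the blocked path contains the container ID; `find_mount_holders` is a hypothetical name.

```shell
# List processes whose mount namespace still references a container's
# filesystem, by scanning /proc/<pid>/mountinfo for the container ID.
find_mount_holders() {
  cid=$1
  grep -l "$cid" /proc/[0-9]*/mountinfo 2>/dev/null | while read -r f; do
    pid=${f#/proc/}
    pid=${pid%/mountinfo}
    # Print the command line so the holder (e.g. cadvisor) is obvious.
    echo "PID $pid: $(tr '\0' ' ' < "/proc/$pid/cmdline")"
  done
}
```

If the holder turns out to be cAdvisor, restarting that one container releases the mounts and the pending `docker rm` succeeds.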
same here with CentOS + Docker 1.8.1(devicemapper)
Had to remove `--volume=/:/rootfs:ro` and `--volume=/var/lib/docker:/var/lib/docker:ro`
@rjnagal: Except for disk usage calculation, cAdvisor does not poke at any of these directories, right?
Same problem here with Ubuntu 14.04.3.
@difro's solution works, but cadvisor can't provide docker stats anymore. Any workaround?
The last time I ran into this problem, I dug a little bit into the cAdvisor source code. I'm not 100% sure, because it was a few weeks ago, but this is essentially the gist:
If you use cAdvisor like it is shown in README.md you'll mount
`/var/lib/docker` as a volume into the container. This will [create dead
containers](https://github.com/docker/docker/issues/9665#issuecomment-137422559).
The reason, cAdvisor wants you to mount `/var/lib/docker` is - as far as I could see - only to display a certain info that is only interesting
for admins and should be known before hand.
We should be able to get all info from a `docker inspect` rather than
parsing the container config file. Seems like mounting `/var/lib/docker`
is causing more trouble than it's worth.
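For instance, the storage driver and image that would otherwise be read from the container config file are available straight from the CLI. A minimal sketch; the wrapper name and container name are examples:

```shell
# Read container metadata via `docker inspect` instead of parsing
# files under /var/lib/docker. `.Driver` is the storage driver backing
# the container; `.Config.Image` is the image it was started from.
inspect_driver() {
  docker inspect --format '{{.Driver}} {{.Config.Image}}' "$1"
}
```

Usage: `inspect_driver my_container`.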
we also encounter the same problem (`cadvisor:lastest`, `ubuntu 14.04`)
any updates regarding this?
The best we can do for now is to let users optionally disable filesystem
usage metrics. We are waiting for some of the new upstream kernel features
to simplify disk accounting.
Same situation.
My Docker Version is 1.9.1
Cadvisor version 0.18.0
And when `docker rm` fails, the status of that container changes to "dead".
Is it possible to umount that specific mountpoint when the container status changes to "exit" or "dead"?
+1
cAdvisor doesn't mount anything. It runs `du` periodically to collect
filesystem stats. Other than that, it does not touch the container's
filesystem at all.
The easy fix for this would be to retry docker deletion or disable
filesystem aggregation in cadvisor.
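The retry suggestion can be sketched as a tiny wrapper; the attempt count and delay are arbitrary, and `retry_rm` is a hypothetical helper name:

```shell
# Retry `docker rm -f` a few times: as several reports in this thread
# note, the busy mount often frees up after a short wait.
retry_rm() {
  name=$1
  attempts=${2:-5}
  i=1
  while [ "$i" -le "$attempts" ]; do
    docker rm -f "$name" 2>/dev/null && return 0
    sleep 2
    i=$((i + 1))
  done
  echo "failed to remove $name after $attempts attempts" >&2
  return 1
}
```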
running cAdvisor without `--volume=/:/rootfs:ro` seems to fix it.
As pointed out in https://github.com/google/cadvisor/blob/master/docs/running.md
I haven't fully tested it yet, but works fine up to now
I had to remove the following volume mounts:
- /:/rootfs:ro
- /var/lib/docker/:/var/lib/docker:ro
Setup:
- Ubuntu 14.04.3 LTS
- docker 1.9.1 with aufs
- cAdvisor 0.20.5
Upgraded docker to 1.10.3 and now cAdvisor can only see the docker images, but no containers, if I only use volume mounts:
- /var/run:/var/run:rw
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
If I add `/:/rootfs:ro`, cAdvisor can see the containers, but I get `device or resource busy`, when trying to remove any container.
@xbglowx Are you using the latest cadvisor release?
Using `google/cadvisor:v0.22.0`
Any ideas or suggestions for how I can dig into the issue?
cc @timstclair
I was able to reproduce this locally with docker v1.9.1 and cAdvisor
0.22.0, but only right after starting cAdvisor and only once (removing a
second container works). I could not reproduce with docker v1.11.
Is this consistent with everyone else's experience?
With docker 1.11.1 the issue is gone. With the latest fixes on the docker side, it seems to be working now.
I'm still able to reproduce
this with docker 1.11.1 and cAdvisor 0.23.0. Ubuntu 14.04.
@ashkop Can you try running cAdvisor with `--disable_metrics="tcp,disk"`
and see if that resolves the issue? Note that you will not get docker
container filesystem metrics by adding this flag.
If I try using `--disable_metrics="tcp,disk"` I get the following:
```
sudo docker run -ti -v /var/lib/docker/:/var/lib/docker:ro -v
/var/run:/var/run:rw -v /sys:/sys:ro -v /:/rootfs:ro google/cadvisor
--disable_metrics="tcp,disk"
panic: assignment to entry in nil map
goroutine 1 [running]:
panic(0xb0c8c0, 0xc8201c0440)
/usr/local/go/src/runtime/panic.go:481 +0x3e6
main.(*metricSetValue).Set(0x15ac528, 0x7ffe3cea1f59, 0x8, 0x0, 0x0)
/go/src/github.com/google/cadvisor/cadvisor.go:85 +0x1da
flag.(*FlagSet).parseOne(0xc82004e060, 0xc82005e901, 0x0, 0x0)
/usr/local/go/src/flag/flag.go:881 +0xdd9
flag.(*FlagSet).Parse(0xc82004e060, 0xc82000a100, 0x2, 0x2, 0x0, 0x0)
/usr/local/go/src/flag/flag.go:900 +0x6e
flag.Parse()
/usr/local/go/src/flag/flag.go:928 +0x6f
main.main()
/go/src/github.com/google/cadvisor/cadvisor.go:99 +0x68
```
This is with `cAdvisor version 0.23.0 (750f18e)`. Works fine with 0.22.0.
I still need to see if using `--disable_metrics="tcp,disk"` fixes the problem.
Yeah, that was fixed in https://github.com/google/cadvisor/pull/1259, but it's not integrated into any release.
@vishh Unfortunately the flag didn't help. As @xbglowx mentioned, this option causes 0.23.0 to crash, so I tried 0.22.0 and canary. Both still prevent me from removing containers. Here's the error message I get:
`Error response from daemon: Unable to remove filesystem for 9e96817fba0a443f75d1426b6d7a586f4bc84217b06eb021f6d28bae4f341473: remove /var/lib/docker/containers/9e96817fba0a443f75d1426b6d7a586f4bc84217b06eb021f6d28bae4f341473/shm: device or resource busy`
Same here on Debian 8, Docker 1.11.1 and latest cAdvisor.
@timstclair Can we make a v0.23.1 release with the fix for `--disable_metrics` flag?
I am experiencing the same issue with the following versions
"cAdvisor version: 0.23.0-750f18e"
google/cadvisor latest 5cda8139955b 8 days ago 48.92 MB
CentOS Linux release 7.2.1511 (Core)
Docker version 1.11.1, build 5604cbe
Work around was to remove /var/lib/docker from the shared volume.
@vishh Is this fixed if we just stopped tracking disk metrics for these
machines? Are there other dependencies?
@rjnagal Disk metrics should be the only dependency. Disabling that by
using `--disable_metrics=tcp,disk` should fix this issue.
Can we do that by default when we detect devicemapper?
@rjnagal AFAIK, it is not limited to devicemapper alone. AUFS is also
affected. If we need a default solution, we will have to disable
per-container disk metrics by default.
The issue persists in v0.23.1 on CentOS7, Docker 1.10.1, devicemapper
```
docker run \
--rm \
--volume=/var/run:/var/run:rw \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
google/cadvisor:v0.23.1 \
-docker_only \
--disable_metrics="tcp,disk"
```
To add more info: the issue persists on v0.23.1 and v0.23.2 on CentOS7, Docker 1.11.1, devicemapper.
However, the issue only occurs when cadvisor is run from docker.
Running cadvisor directly on CentOS7 works without issues.
Could you add more details about your repro steps? How many containers are you running, and with what options? It would help if we could reproduce from a clean CentOS VM image.
I tried to reproduce it on a fresh VM, but failed. I'll try to find the difference that is actually causing the issue. Meanwhile I ran `lsof` inside the `cadvisor` container on the file that is being blocked. Here's what I got:
```
1 /usr/bin/cadvisor pipe:[70918923]
1 /usr/bin/cadvisor pipe:[70918924]
1 /usr/bin/cadvisor pipe:[70918925]
1 /usr/bin/cadvisor socket:[70919220]
1 /usr/bin/cadvisor anon_inode:[eventpoll]
1 /usr/bin/cadvisor
anon_inode:inotify
1 /usr/bin/cadvisor socket:[70919240]
```
I also noticed that issue occurs only if I start `cadvisor` after my own
containers. If `cadvisor` is the first one started, then I can restart
my containers without any issue.
@ashkop That's actually correct. I tried to reproduce the error but couldn't. Only if the other containers are started first does cadvisor block removal.
Here's a script to replicate the error on CentOS 7.
You will need a machine with an empty block device (just replace the path to the device in `DOCKER_DATA_DISK`). It will set up docker with devicemapper through lvm's thin-pool, run a container, then cadvisor, and then stop and rm the first container.
```
#!/bin/bash
DOCKER_DATA_DISK=/dev/vdb
set -exo pipefail
setenforce Permissive
yum update -y
yum install -y lvm2
systemctl enable lvm2-lvmetad
systemctl start lvm2-lvmetad
pvcreate $DOCKER_DATA_DISK
vgcreate data $DOCKER_DATA_DISK
lvcreate -l 100%free -T data/docker_thin
curl -sSL https://get.docker.com/ | sh
mkdir -p /etc/systemd/system/docker.service.d
cat <<EOF > /etc/systemd/system/docker.service.d/docker-lvm.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// \
-s devicemapper \
--storage-opt dm.thinpooldev=/dev/mapper/data-docker_thin
TimeoutStartSec=3000
EOF
systemctl daemon-reload
systemctl enable docker
systemctl start docker
sleep 3
docker run \
--name=test \
-d \
debian:jessie \
/bin/sh -c "while true; do foo; sleep 1; done"
docker run \
-d \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:rw \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--name=cadvisor \
google/cadvisor:v0.23.1 \
-docker_only \
--disable_metrics="tcp,disk"
docker stop test
docker rm test
```
The output is:
```
... some data ...
+ docker stop test
test
+ docker rm test
Error response from daemon: Unable to remove filesystem for
7d7513b0c3310f26e7425728f9c34e219db53a5e4dbb6e0e4259c2e6eb760044: remove
/var/lib/docker/containers/7d7513b0c3310f26e7425728f9c34e219db53a5e4dbb6e0e4259c2e6eb760044/shm:
device or resource busy
```
On Ubuntu 14.04, using `--disable_metrics="tcp,disk"` still does not fix
the problem. I've confirmed @ashkop 's observation: If cAdvisor is
started after another container, then removing said container fails.
To get around this issue I have tried running cadvisor standalone; however, it does not get data while I am using RHEL. cadvisor complains "unable to get fs usage from thin pool for device"; it seems it can't get the right information about the storage driver.
Using RHEL 7.1, cadvisor version 0.23.3 (6607e7c), docker 1.9.1.
Has anybody tried something similar?
This issue is hitting us often and affecting production container
deployments (Debian 8.5 hosts, Docker 1.11.1).
Can anyone spell out what we lose by omitting the `/:/rootfs:ro` mount?
Is it just disk usage metrics?
AFAIK, it should be just the disk usage metrics
So, is it possible to `stop` cadvisor before stopping/starting any other containers and then `start` cadvisor again?
cadvisor should be the first container to start.
One should not have to worry about starting/stopping containers in order to properly run cAdvisor. Monitoring should have no effect on the running of containers.
100% agreed. I'm just saying that, as a workaround, you can start cAdvisor before the other containers.
But once cAdvisor is monitoring the other containers, you are not able to remove any of the monitored ones until you remove cAdvisor; at least that happened to me a lot. For now I stop cAdvisor, update the containers, and start it again. Am I doing this wrong?
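The stop/update/start dance described above can at least be wrapped so it is hard to forget a step. A minimal sketch, assuming the monitoring container is named `cadvisor`; `with_cadvisor_stopped` is a hypothetical helper:

```shell
# Run a deployment step with cAdvisor stopped, then bring it back up,
# so container removal isn't blocked by mounts cAdvisor holds.
with_cadvisor_stopped() {
  docker stop cadvisor >/dev/null
  status=0
  "$@" || status=$?   # run the deployment step, remember its result
  docker start cadvisor >/dev/null
  return $status
}
```

Usage: `with_cadvisor_stopped docker rm -f old_app`.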
Alvaro
I ran into this issue as well. Removing the `/:/rootfs:ro` volume works around the issue for me, but I do lose some stats: network inside the containers, process lists, and maybe others I haven't noticed yet.
Docker 1.11.2, cAdvisor 0.23.1
I am able to confirm that when cAdvisor is loaded first, any containers loaded afterwards can be removed without the 'device or resource busy' error.
Wanted to add that removing the root volume (`/:/rootfs:ro`) did **not**
solve this issue for us. We ended up removing cadvisor from our
deployment ecosystem until this issue is resolved as it was causing too
much pain in our deployment scheme.
upgrading to Docker 1.12 also did not make any difference
Removing the root volume did not make any difference for me. Still blocking containers from removal.
docker 1.11.1
Going to remove cAdvisor from all systems as it is blocking my deployment.
We have this issue too in our production environment.
It's very frustrating because it blocks our upgrade process.
We use Debian 8, Docker 1.10.3 and cadvisor 0.23.2.
I've opted to take cadvisor down while deploying/removing containers and then bring it up again. Not much liked, but it works.
Alvaro
I ended up just running cAdvisor on the host instead of as a container,
and it is working well that way.
Hello Chadxz,
Did you build the latest release or master? Because I have the impression the current master branch has issues, or I'm doing something wrong.
```
../../../golang.org/x/oauth2/jws/jws.go:67:17: error: reference to undefined identifier ‘base64.RawURLEncoding’
	return base64.RawURLEncoding.EncodeToString(b), nil
	               ^
../../../golang.org/x/oauth2/jws/jws.go:85:16: error: reference to undefined identifier ‘base64.RawURLEncoding’
	return base64.RawURLEncoding.EncodeToString(b), nil
	              ^
../../../golang.org/x/oauth2/jws/jws.go:105:16: error: reference to undefined identifier ‘base64.RawURLEncoding’
	return base64.RawURLEncoding.EncodeToString(b), nil
	              ^
../../../golang.org/x/oauth2/jws/jws.go:116:25: error: reference to undefined identifier ‘base64.RawURLEncoding’
	decoded, err := base64.RawURLEncoding.DecodeString(s[1])
	                       ^
```
and it goes on like that.
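For what it's worth, `base64.RawURLEncoding` was only added to the Go standard library in Go 1.5, so errors like the above are what an older toolchain produces when building current `golang.org/x/oauth2` sources. A hedged sketch of a pre-build guard; `go_minor` and `require_go_15` are hypothetical helpers, not part of any build script in this thread.

```shell
#!/usr/bin/env bash
# Extract the minor Go version from `go version` output, e.g.
# "go version go1.3.3 darwin/amd64" -> 3
go_minor() {
  sed -E 's/.*go1\.([0-9]+).*/\1/' <<<"$1"
}

# Fail early with a clear message instead of a wall of compile errors.
require_go_15() {
  local banner minor
  banner=$(go version) || return 1
  minor=$(go_minor "$banner")
  if [ "$minor" -lt 5 ]; then
    echo "base64.RawURLEncoding needs Go >= 1.5; found: $banner" >&2
    return 1
  fi
}
```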
We too removed cadvisor from our dev systems as it created 'dead' containers when we tried to remove others.
Evert, which tool have you moved to for monitoring containers?
@EvertMDC I downloaded the prebuilt binary from the latest stable release on the releases
page https://github.com/google/cadvisor/releases/tag/v0.23.2
Thanks @chadxz . I overlooked that.
Hello @zevarito. None at the moment, but I have used the container exporter
images and they worked fine. They suggest using cadvisor, however, as they
are no longer maintaining it.
Going to run it on the system itself now, as Chadxz suggested, and see how it goes.
https://github.com/docker-infra/container_exporter
Hi Evert,
I do everything with cAdvisor, but I will stop monitoring the host itself
with cAdvisor and just monitor the containers. For the host, I think Node
Exporter is the safest bet.
:+1: to node exporter plus cadvisor. That's what we use, and we're very happy
with that combo.
Hey @chadxz, have you continued to have success running cadvisor on the
host directly?
I am facing the same problem in production when running cadvisor in a
container. It's hard to validate in a short period of time whether the
bare-metal approach fixes this bug, since it appears so rarely.
@jaybennett89 cadvisor running on host has been working fine. The
"Unable to remove filesystem" error never occurs.
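For anyone following the run-on-the-host route: a small systemd unit is one way to keep the binary supervised. This is only a sketch; the binary path, flags, and port are assumptions, not anything confirmed in this thread.

```ini
# /etc/systemd/system/cadvisor.service (hypothetical path and flags)
[Unit]
Description=cAdvisor container metrics exporter
After=docker.service

[Service]
ExecStart=/usr/local/bin/cadvisor -port 8080
Restart=always

[Install]
WantedBy=multi-user.target
```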
Our server was running out of disk space due to some docker diff files
piling up (it seems they need to be cleaned out regularly, but that is a
different topic). After making some space, `ps:rebuild` is not working anymore; it just returns immediately with no output. We also cannot push to the server anymore; we get:
```
! [remote rejected] dokku -> master (pre-receive hook declined)
error: failed to push some refs to 'dokku@.....:.....'
```
Help is greatly appreciated.
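On the side note about diff files piling up: with a Docker as new as the 17.05.0-ce in the report below, disk can usually be reclaimed with the built-in prune commands. A hedged sketch only; the threshold helper is made up, and prune commands should be run with care since they delete stopped containers and dangling images.

```shell
#!/usr/bin/env bash
# Percentage of free space on the filesystem described by `df -P` output,
# e.g. to decide when cleanup is due. (Hypothetical helper, not dokku's.)
free_pct() {
  awk 'NR == 2 { gsub("%", "", $5); print 100 - $5 }' <<<"$1"
}

cleanup_if_low() {
  local threshold=${1:-10}   # prune when less than 10% free
  if [ "$(free_pct "$(df -P /var/lib/docker)")" -lt "$threshold" ]; then
    # These subcommands exist in Docker >= 1.13:
    docker container prune -f   # remove all stopped containers
    docker image prune -f       # remove dangling images
  fi
}
```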
dokku report
-----> uname: Linux ubuntu-512mb-fra1-01 3.13.0-77-generic #121-Ubuntu SMP Wed Jan 20 10:50:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
-----> memory:
             total       used       free     shared    buffers     cached
Mem:           994        647        346         10         19        111
-/+ buffers/cache:        515        478
Swap:          999        112        887
-----> docker version:
Client:
Version: 17.05.0-ce
API version: 1.29
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:06:06 2017
OS/Arch: linux/amd64
Server:
Version: 17.05.0-ce
API version: 1.29 (minimum version 1.12)
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:06:06 2017
OS/Arch: linux/amd64
Experimental: false
-----> docker daemon info:
Containers: 12
Running: 5
Paused: 0
Stopped: 7
Images: 32
Server Version: 17.05.0-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 1133
Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
apparmor
Kernel Version: 3.13.0-77-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 994MiB
Name: ubuntu-512mb-fra1-01
ID: 5XQF:UC63:3ZUV:GI2Y:JEWP:NJHA:NQ6X:5IKC:RRFA:XLTA:IILX:HBUZ
Docker Root Dir: /var/lib/docker
Debug Mode (client): true
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
-----> sigil version: 0.4.0
-----> herokuish version:
herokuish: 0.3.31
buildpacks:
heroku-buildpack-multi v1.0.0
heroku-buildpack-ruby v163
heroku-buildpack-nodejs v99
heroku-buildpack-clojure v76
heroku-buildpack-python v109
heroku-buildpack-java v52
heroku-buildpack-gradle v22
heroku-buildpack-grails v21
heroku-buildpack-scala v76
heroku-buildpack-play v26
heroku-buildpack-php v121
heroku-buildpack-go v69
heroku-buildpack-erlang fa17af9
buildpack-nginx v8
-----> dokku version: 0.5.7
-----> dokku plugins:
plugn: 0.3.0
00_dokku-standard 0.5.7 enabled dokku core standard plugin
20_events 0.5.7 enabled dokku core events logging plugin
apps 0.5.7 enabled dokku core apps plugin
build-env 0.5.7 enabled dokku core build-env plugin
certs 0.5.7 enabled dokku core certificate management plugin
checks 0.5.7 enabled dokku core checks plugin
common 0.5.7 enabled dokku core common plugin
config 0.5.7 enabled dokku core config plugin
docker-options 0.5.7 enabled dokku core docker-options plugin
domains 0.5.7 enabled dokku core domains plugin
enter 0.5.7 enabled dokku core enter plugin
git 0.5.7 enabled dokku core git plugin
letsencrypt 0.8.6 enabled Automated installation of let's encrypt TLS certificates
logs 0.5.7 enabled dokku core logs plugin
memcached 1.0.0 enabled dokku memcached service plugin
named-containers 0.5.7 enabled dokku core named containers plugin
nginx-vhosts 0.5.7 enabled dokku core nginx-vhosts plugin
plugin 0.5.7 enabled dokku core plugin plugin
postgres 1.0.0 enabled dokku postgres service plugin
proxy 0.5.7 enabled dokku core proxy plugin
ps 0.5.7 enabled dokku core ps plugin
shell 0.5.7 enabled dokku core shell plugin
storage 0.5.7 enabled dokku core storage plugin
tags 0.5.7 enabled dokku core tags plugin
tar 0.5.7 enabled dokku core tar plugin
Environment details (AWS, VirtualBox, physical, etc.): Digital Ocean
Can you turn on trace mode and try deploying?
Okay, this is what I get when pushing:
```
+ case "$(lsb_release -si)" in
++ lsb_release -si
+ export DOKKU_DISTRO=ubuntu
+ DOKKU_DISTRO=ubuntu
+ export DOKKU_IMAGE=gliderlabs/herokuish
+ DOKKU_IMAGE=gliderlabs/herokuish
+ export DOKKU_LIB_ROOT=/var/lib/dokku
+ DOKKU_LIB_ROOT=/var/lib/dokku
+ export PLUGIN_PATH=/var/lib/dokku/plugins
+ PLUGIN_PATH=/var/lib/dokku/plugins
+ export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ export DOKKU_API_VERSION=1
+ DOKKU_API_VERSION=1
+ export DOKKU_NOT_IMPLEMENTED_EXIT=10
+ DOKKU_NOT_IMPLEMENTED_EXIT=10
+ export DOKKU_VALID_EXIT=0
+ DOKKU_VALID_EXIT=0
+ export DOKKU_LOGS_DIR=/var/log/dokku
+ DOKKU_LOGS_DIR=/var/log/dokku
+ export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ export DOKKU_CONTAINER_LABEL=dokku
+ DOKKU_CONTAINER_LABEL=dokku
+ export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ parse_args git-receive-pack ''\''myapp'\'''
+ declare 'desc=top-level cli arg parser'
+ local next_index=1
+ local skip=false
+ args=("$@")
+ local args
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=2
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=3
+ return 0
+ args=("$@")
+ [[ git-receive-pack =~ ^--.* ]]
+ has_tty
+ declare 'desc=return 0 if we have a tty'
++ /usr/bin/tty
++ true
+ [[ not a tty == \n\o\t\ \a\ \t\t\y ]]
+ return 1
+ DOKKU_QUIET_OUTPUT=1
++ id -un
+ [[ dokku != \d\o\k\k\u ]]
++ id -un
+ [[ dokku != \r\o\o\t ]]
+ [[ git-receive-pack =~ ^plugin:.* ]]
+ [[ -n git-receive-pack 'myapp' ]]
+ export -n SSH_ORIGINAL_COMMAND
+ [[ git-receive-pack =~ config-* ]]
+ [[ git-receive-pack =~ docker-options* ]]
+ set -f
+ /usr/local/bin/dokku git-receive-pack ''\''myapp'\'''
+ case "$(lsb_release -si)" in
++ lsb_release -si
+ export DOKKU_DISTRO=ubuntu
+ DOKKU_DISTRO=ubuntu
+ export DOKKU_IMAGE=gliderlabs/herokuish
+ DOKKU_IMAGE=gliderlabs/herokuish
+ export DOKKU_LIB_ROOT=/var/lib/dokku
+ DOKKU_LIB_ROOT=/var/lib/dokku
+ export PLUGIN_PATH=/var/lib/dokku/plugins
+ PLUGIN_PATH=/var/lib/dokku/plugins
+ export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ export DOKKU_API_VERSION=1
+ DOKKU_API_VERSION=1
+ export DOKKU_NOT_IMPLEMENTED_EXIT=10
+ DOKKU_NOT_IMPLEMENTED_EXIT=10
+ export DOKKU_VALID_EXIT=0
+ DOKKU_VALID_EXIT=0
+ export DOKKU_LOGS_DIR=/var/log/dokku
+ DOKKU_LOGS_DIR=/var/log/dokku
+ export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ export DOKKU_CONTAINER_LABEL=dokku
+ DOKKU_CONTAINER_LABEL=dokku
+ export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ parse_args git-receive-pack ''\''myapp'\'''
+ declare 'desc=top-level cli arg parser'
+ local next_index=1
+ local skip=false
+ args=("$@")
+ local args
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=2
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=3
+ return 0
+ args=("$@")
+ [[ git-receive-pack =~ ^--.* ]]
+ has_tty
+ declare 'desc=return 0 if we have a tty'
++ /usr/bin/tty
++ true
+ [[ not a tty == \n\o\t\ \a\ \t\t\y ]]
+ return 1
+ DOKKU_QUIET_OUTPUT=1
++ id -un
+ [[ dokku != \d\o\k\k\u ]]
++ id -un
+ [[ dokku != \r\o\o\t ]]
+ [[ git-receive-pack =~ ^plugin:.* ]]
+ [[ -n '' ]]
+ dokku_auth git-receive-pack ''\''myapp'\'''
+ declare 'desc=calls user-auth plugin trigger'
+ export SSH_USER=dokku
+ SSH_USER=dokku
+ export 'SSH_NAME=[nico]'
+ SSH_NAME='[nico]'
+ plugn trigger user-auth dokku '[nico]' git-receive-pack ''\''myapp'\'''
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ [[ ! -n '' ]]
+ return 0
+ case "$1" in
+ execute_dokku_cmd git-receive-pack ''\''myapp'\'''
+ declare 'desc=executes dokku sub-commands'
+ local PLUGIN_NAME=git-receive-pack
+ local PLUGIN_CMD=git-receive-pack
+ local implemented=0
+ local script
+ argv=("$@")
+ local argv
+ case "$PLUGIN_NAME" in
++ readlink -f /var/lib/dokku/plugins/enabled/git-receive-pack
+ [[ /var/lib/dokku/plugins/enabled/git-receive-pack == *core-plugins* ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-receive-pack/subcommands/default ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-receive-pack/subcommands/git-receive-pack ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-receive-pack/subcommands/git-receive-pack ]]
+ [[ 0 -eq 0 ]]
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/00_dokku-standard/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/20_events/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/apps/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/certs/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/checks/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/config/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/docker-options/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/domains/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/enter/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/git/commands git-receive-pack ''\''myapp'\'''
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ source /var/lib/dokku/plugins/available/apps/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ source /var/lib/dokku/core-plugins/available/common/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
++ source /var/lib/dokku/plugins/available/config/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
+++ source /var/lib/dokku/core-plugins/available/common/functions
++++ set -eo pipefail
++++ [[ -n 1 ]]
++++ set -x
+ source /var/lib/dokku/plugins/available/config/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ source /var/lib/dokku/core-plugins/available/common/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
+ case "$1" in
+ git_glob_cmd git-receive-pack ''\''myapp'\'''
+ declare 'desc=catch-all for any other git-* commands'
+ local 'cmd=git-*'
++ sed 's/^\///g'
++ sed 's/\\'\''/'\''/g'
++ perl -pe 's/(?<!\\)'\''//g'
++ echo ''\''myapp'\'''
+ local APP=myapp
+ local APP_PATH=/home/dokku/myapp
+ [[ git-receive-pack == \g\i\t\-\r\e\c\e\i\v\e\-\p\a\c\k ]]
+ [[ ! -d /home/dokku/myapp/refs ]]
+ [[ git-receive-pack == \g\i\t\-\r\e\c\e\i\v\e\-\p\a\c\k ]]
+ local 'args=git-receive-pack '\''/home/dokku/myapp'\'''
+ git-shell -c 'git-receive-pack '\''/home/dokku/myapp'\'''
Total 0 (delta 0), reused 0 (delta 0)
remote: + case "$(lsb_release -si)" in
remote: ++ lsb_release -si
remote: + export DOKKU_DISTRO=ubuntu
remote: + DOKKU_DISTRO=ubuntu
remote: + export DOKKU_IMAGE=gliderlabs/herokuish
remote: + DOKKU_IMAGE=gliderlabs/herokuish
remote: + export DOKKU_LIB_ROOT=/var/lib/dokku
remote: + DOKKU_LIB_ROOT=/var/lib/dokku
remote: + export PLUGIN_PATH=/var/lib/dokku/plugins
remote: + PLUGIN_PATH=/var/lib/dokku/plugins
remote: + export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
remote: + PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
remote: + export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
remote: + PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
remote: + export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
remote: + PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
remote: + export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
remote: + PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
remote: + export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
remote: + PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
remote: + export DOKKU_API_VERSION=1
remote: + DOKKU_API_VERSION=1
remote: + export DOKKU_NOT_IMPLEMENTED_EXIT=10
remote: + DOKKU_NOT_IMPLEMENTED_EXIT=10
remote: + export DOKKU_VALID_EXIT=0
remote: + DOKKU_VALID_EXIT=0
remote: + export DOKKU_LOGS_DIR=/var/log/dokku
remote: + DOKKU_LOGS_DIR=/var/log/dokku
remote: + export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
remote: + DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
remote: + export DOKKU_CONTAINER_LABEL=dokku
remote: + DOKKU_CONTAINER_LABEL=dokku
remote: + export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
remote: + DOKKU_GLOBAL_RUN_ARGS=--label=dokku
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + parse_args git-hook myapp
remote: + declare 'desc=top-level cli arg parser'
remote: + local next_index=1
remote: + local skip=false
remote: + args=("$@")
remote: + local args
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=2
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=3
remote: + return 0
remote: + args=("$@")
remote: + [[ git-hook =~ ^--.* ]]
remote: + has_tty
remote: + declare 'desc=return 0 if we have a tty'
remote: ++ /usr/bin/tty
remote: ++ true
remote: + [[ not a tty == \n\o\t\ \a\ \t\t\y ]]
remote: + return 1
remote: + DOKKU_QUIET_OUTPUT=1
remote: ++ id -un
remote: + [[ dokku != \d\o\k\k\u ]]
remote: ++ id -un
remote: + [[ dokku != \r\o\o\t ]]
remote: + [[ git-hook =~ ^plugin:.* ]]
remote: + [[ -n '' ]]
remote: + dokku_auth git-hook myapp
remote: + declare 'desc=calls user-auth plugin trigger'
remote: + export SSH_USER=dokku
remote: + SSH_USER=dokku
remote: + export 'SSH_NAME=[nico]'
remote: + SSH_NAME='[nico]'
remote: + plugn trigger user-auth dokku '[nico]' git-hook myapp
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + [[ ! -n '' ]]
remote: + return 0
remote: + case "$1" in
remote: + execute_dokku_cmd git-hook myapp
remote: + declare 'desc=executes dokku sub-commands'
remote: + local PLUGIN_NAME=git-hook
remote: + local PLUGIN_CMD=git-hook
remote: + local implemented=0
remote: + local script
remote: + argv=("$@")
remote: + local argv
remote: + case "$PLUGIN_NAME" in
remote: ++ readlink -f /var/lib/dokku/plugins/enabled/git-hook
remote: + [[ /var/lib/dokku/plugins/enabled/git-hook == *core-plugins* ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-hook/subcommands/default ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-hook/subcommands/git-hook ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-hook/subcommands/git-hook ]]
remote: + [[ 0 -eq 0 ]]
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/00_dokku-standard/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/20_events/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/apps/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/certs/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/checks/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/config/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/docker-options/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/domains/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/enter/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/git/commands git-hook myapp
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + source /var/lib/dokku/plugins/available/apps/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: ++ source /var/lib/dokku/core-plugins/available/common/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: ++ source /var/lib/dokku/plugins/available/config/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: +++ source /var/lib/dokku/core-plugins/available/common/functions
remote: ++++ set -eo pipefail
remote: ++++ [[ -n 1 ]]
remote: ++++ set -x
remote: + source /var/lib/dokku/plugins/available/config/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: ++ source /var/lib/dokku/core-plugins/available/common/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: + case "$1" in
remote: + git_hook_cmd git-hook myapp
remote: + declare 'desc=kick off receive-app trigger from git prereceive hook'
remote: + local cmd=git-hook
remote: + local APP=myapp
remote: + local oldrev newrev refname
remote: + read -r oldrev newrev refname
remote: + [[ refs/heads/master = \r\e\f\s\/\h\e\a\d\s\/\m\a\s\t\e\r ]]
remote: + plugn trigger receive-app myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + [[ ! -n '' ]]
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + git_receive_app myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=git receive-app plugin trigger'
remote: + local trigger=git_receive_app
remote: + local APP=myapp
remote: + local REV=b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + [[ ! -d /home/dokku/myapp/refs ]]
remote: + dokku git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + case "$(lsb_release -si)" in
remote: ++ lsb_release -si
remote: + export DOKKU_DISTRO=ubuntu
remote: + DOKKU_DISTRO=ubuntu
remote: + export DOKKU_IMAGE=gliderlabs/herokuish
remote: + DOKKU_IMAGE=gliderlabs/herokuish
remote: + export DOKKU_LIB_ROOT=/var/lib/dokku
remote: + DOKKU_LIB_ROOT=/var/lib/dokku
remote: + export PLUGIN_PATH=/var/lib/dokku/plugins
remote: + PLUGIN_PATH=/var/lib/dokku/plugins
remote: + export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
remote: + PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
remote: + export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
remote: + PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
remote: + export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
remote: + PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
remote: + export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
remote: + PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
remote: + export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
remote: + PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
remote: + export DOKKU_API_VERSION=1
remote: + DOKKU_API_VERSION=1
remote: + export DOKKU_NOT_IMPLEMENTED_EXIT=10
remote: + DOKKU_NOT_IMPLEMENTED_EXIT=10
remote: + export DOKKU_VALID_EXIT=0
remote: + DOKKU_VALID_EXIT=0
remote: + export DOKKU_LOGS_DIR=/var/log/dokku
remote: + DOKKU_LOGS_DIR=/var/log/dokku
remote: + export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
remote: + DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
remote: + export DOKKU_CONTAINER_LABEL=dokku
remote: + DOKKU_CONTAINER_LABEL=dokku
remote: + export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
remote: + DOKKU_GLOBAL_RUN_ARGS=--label=dokku
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + parse_args git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=top-level cli arg parser'
remote: + local next_index=1
remote: + local skip=false
remote: + args=("$@")
remote: + local args
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=2
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=3
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=4
remote: + return 0
remote: + args=("$@")
remote: + [[ git-build =~ ^--.* ]]
remote: + has_tty
remote: + declare 'desc=return 0 if we have a tty'
remote: ++ /usr/bin/tty
remote: ++ true
remote: + [[ not a tty == \n\o\t\ \a\ \t\t\y ]]
remote: + return 1
remote: + DOKKU_QUIET_OUTPUT=1
remote: ++ id -un
remote: + [[ dokku != \d\o\k\k\u ]]
remote: ++ id -un
remote: + [[ dokku != \r\o\o\t ]]
remote: + [[ git-build =~ ^plugin:.* ]]
remote: + [[ -n '' ]]
remote: + dokku_auth git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=calls user-auth plugin trigger'
remote: + export SSH_USER=dokku
remote: + SSH_USER=dokku
remote: + export 'SSH_NAME=[nico]'
remote: + SSH_NAME='[nico]'
remote: + plugn trigger user-auth dokku '[nico]' git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + [[ ! -n '' ]]
remote: + return 0
remote: + case "$1" in
remote: + execute_dokku_cmd git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=executes dokku sub-commands'
remote: + local PLUGIN_NAME=git-build
remote: + local PLUGIN_CMD=git-build
remote: + local implemented=0
remote: + local script
remote: + argv=("$@")
remote: + local argv
remote: + case "$PLUGIN_NAME" in
remote: ++ readlink -f /var/lib/dokku/plugins/enabled/git-build
remote: + [[ /var/lib/dokku/plugins/enabled/git-build == *core-plugins* ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-build/subcommands/default ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-build/subcommands/git-build ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-build/subcommands/git-build ]]
remote: + [[ 0 -eq 0 ]]
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/00_dokku-standard/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/20_events/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/apps/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/certs/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/checks/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/config/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/docker-options/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/domains/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/enter/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/git/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + source /var/lib/dokku/plugins/available/apps/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: ++ source /var/lib/dokku/core-plugins/available/common/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: ++ source /var/lib/dokku/plugins/available/config/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: +++ source /var/lib/dokku/core-plugins/available/common/functions
remote: ++++ set -eo pipefail
remote: ++++ [[ -n 1 ]]
remote: ++++ set -x
remote: + source /var/lib/dokku/plugins/available/config/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: ++ source /var/lib/dokku/core-plugins/available/common/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: + case "$1" in
remote: + git_build_cmd git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=lock git-build'
remote: + local cmd=git-build
remote: + local APP=myapp
remote: + local APP_BUILD_LOCK=/home/dokku/myapp/.build.lock
remote: + local 'APP_BUILD_LOCK_MSG=myapp is currently being deployed or locked. Waiting...'
remote: ++ flock -n /home/dokku/myapp/.build.lock true
remote: ++ echo 0
remote: + [[ 0 -ne 0 ]]
remote: + shift 1
remote: + flock -o /home/dokku/myapp/.build.lock dokku git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + case "$(lsb_release -si)" in
remote: ++ lsb_release -si
remote: + export DOKKU_DISTRO=ubuntu
remote: + DOKKU_DISTRO=ubuntu
remote: + export DOKKU_IMAGE=gliderlabs/herokuish
remote: + DOKKU_IMAGE=gliderlabs/herokuish
remote: + export DOKKU_LIB_ROOT=/var/lib/dokku
remote: + DOKKU_LIB_ROOT=/var/lib/dokku
remote: + export PLUGIN_PATH=/var/lib/dokku/plugins
remote: + PLUGIN_PATH=/var/lib/dokku/plugins
remote: + export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
remote: + PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
remote: + export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
remote: + PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
remote: + export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
remote: + PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
remote: + export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
remote: + PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
remote: + export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
remote: + PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
remote: + export DOKKU_API_VERSION=1
remote: + DOKKU_API_VERSION=1
remote: + export DOKKU_NOT_IMPLEMENTED_EXIT=10
remote: + DOKKU_NOT_IMPLEMENTED_EXIT=10
remote: + export DOKKU_VALID_EXIT=0
remote: + DOKKU_VALID_EXIT=0
remote: + export DOKKU_LOGS_DIR=/var/log/dokku
remote: + DOKKU_LOGS_DIR=/var/log/dokku
remote: + export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
remote: + DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
remote: + export DOKKU_CONTAINER_LABEL=dokku
remote: + DOKKU_CONTAINER_LABEL=dokku
remote: + export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
remote: + DOKKU_GLOBAL_RUN_ARGS=--label=dokku
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + parse_args git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=top-level cli arg parser'
remote: + local next_index=1
remote: + local skip=false
remote: + args=("$@")
remote: + local args
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=2
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=3
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=4
remote: + return 0
remote: + args=("$@")
remote: + [[ git-build-locked =~ ^--.* ]]
remote: + has_tty
remote: + declare 'desc=return 0 if we have a tty'
remote: ++ /usr/bin/tty
remote: ++ true
remote: + [[ not a tty == \n\o\t\ \a\ \t\t\y ]]
remote: + return 1
remote: + DOKKU_QUIET_OUTPUT=1
remote: ++ id -un
remote: + [[ dokku != \d\o\k\k\u ]]
remote: ++ id -un
remote: + [[ dokku != \r\o\o\t ]]
remote: + [[ git-build-locked =~ ^plugin:.* ]]
remote: + [[ -n '' ]]
remote: + dokku_auth git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=calls user-auth plugin trigger'
remote: + export SSH_USER=dokku
remote: + SSH_USER=dokku
remote: + export 'SSH_NAME=[nico]'
remote: + SSH_NAME='[nico]'
remote: + plugn trigger user-auth dokku '[nico]' git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + [[ ! -n '' ]]
remote: + return 0
remote: + case "$1" in
remote: + execute_dokku_cmd git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=executes dokku sub-commands'
remote: + local PLUGIN_NAME=git-build-locked
remote: + local PLUGIN_CMD=git-build-locked
remote: + local implemented=0
remote: + local script
remote: + argv=("$@")
remote: + local argv
remote: + case "$PLUGIN_NAME" in
remote: ++ readlink -f /var/lib/dokku/plugins/enabled/git-build-locked
remote: + [[ /var/lib/dokku/plugins/enabled/git-build-locked == *core-plugins* ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-build-locked/subcommands/default ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-build-locked/subcommands/git-build-locked ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-build-locked/subcommands/git-build-locked ]]
remote: + [[ 0 -eq 0 ]]
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/00_dokku-standard/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/20_events/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/apps/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/certs/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/checks/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/config/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/docker-options/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/domains/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/enter/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/git/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + source /var/lib/dokku/plugins/available/apps/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: ++ source /var/lib/dokku/core-plugins/available/common/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: ++ source /var/lib/dokku/plugins/available/config/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: +++ source /var/lib/dokku/core-plugins/available/common/functions
remote: ++++ set -eo pipefail
remote: ++++ [[ -n 1 ]]
remote: ++++ set -x
remote: + source /var/lib/dokku/plugins/available/config/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: ++ source /var/lib/dokku/core-plugins/available/common/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: + case "$1" in
remote: + git_build_locked_cmd git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=setup and call git_build_app_repo'
remote: + local cmd=git-build-locked
remote: + local APP=myapp
remote: + [[ 3 -ge 3 ]]
remote: + local REF=b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + git_build_app_repo myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=builds local git app repo for app'
remote: + verify_app_name myapp
remote: + declare 'desc=verify app name format and app existence'
remote: + local APP=myapp
remote: + [[ ! -n myapp ]]
remote: + [[ ! myapp =~ ^[a-z].* ]]
remote: + [[ ! -d /home/dokku/myapp ]]
remote: + return 0
remote: + local APP=myapp
remote: + local REV=b4287d41e948d5bd423edd1d5528d86784678d8c
remote: ++ mktemp -d /tmp/dokku_git.XXXX
remote: + local GIT_BUILD_APP_REPO_TMP_WORK_DIR=/tmp/dokku_git.kXDG
remote: + trap 'rm -rf "$GIT_BUILD_APP_REPO_TMP_WORK_DIR" | /dev/null' RETURN INT TERM EXIT
remote: + local TMP_TAG=dokku/b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + chmod 755 /tmp/dokku_git.kXDG
remote: + unset GIT_DIR GIT_WORK_TREE
remote: + pushd /tmp/dokku_git.kXDG
remote: + [[ ! -d /home/dokku/myapp ]]
remote: + GIT_DIR=/home/dokku/myapp
remote: + git tag -d dokku/b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + GIT_DIR=/home/dokku/myapp
remote: + git tag dokku/b4287d41e948d5bd423edd1d5528d86784678d8c b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + git init
remote: + git config advice.detachedHead false
remote: + git remote add origin /home/dokku/myapp
remote: + git fetch --depth=1 origin refs/tags/dokku/b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + rm -rf /tmp/dokku_git.kXDG
remote: + exit_code=141
remote: + set -e
remote: + [[ 141 -eq 10 ]]
remote: + implemented=1
remote: + [[ 141 -ne 0 ]]
remote: + exit 141
remote: + exit_code=141
remote: + set -e
remote: + [[ 141 -eq 10 ]]
remote: + implemented=1
remote: + [[ 141 -ne 0 ]]
remote: + exit 141
remote: + exit_code=141
remote: + set -e
remote: + [[ 141 -eq 10 ]]
remote: + implemented=1
remote: + [[ 141 -ne 0 ]]
remote: + exit 141
+ exit_code=0
+ set -e
+ [[ 0 -eq 10 ]]
+ implemented=1
+ [[ 0 -ne 0 ]]
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/letsencrypt/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/logs/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/memcached/commands git-receive-pack ''\''myapp'\'''
+ PLUGIN_BASE_PATH=/var/lib/dokku/plugins
+ [[ -n 1 ]]
+ PLUGIN_BASE_PATH=/var/lib/dokku/plugins/enabled
+ source /var/lib/dokku/plugins/enabled/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ dirname /var/lib/dokku/plugins/enabled/memcached/commands
+ source /var/lib/dokku/plugins/enabled/memcached/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+++ dirname /var/lib/dokku/plugins/enabled/memcached/commands
++ source /var/lib/dokku/plugins/enabled/memcached/config
+++ export MEMCACHED_IMAGE=memcached
+++ MEMCACHED_IMAGE=memcached
+++ export MEMCACHED_IMAGE_VERSION=1.4.25
+++ MEMCACHED_IMAGE_VERSION=1.4.25
+++ export MEMCACHED_ROOT=/var/lib/dokku/services/memcached
+++ MEMCACHED_ROOT=/var/lib/dokku/services/memcached
+++ export PLUGIN_COMMAND_PREFIX=memcached
+++ PLUGIN_COMMAND_PREFIX=memcached
+++ export PLUGIN_DATA_ROOT=/var/lib/dokku/services/memcached
+++ PLUGIN_DATA_ROOT=/var/lib/dokku/services/memcached
+++ PLUGIN_DATASTORE_PORTS=(11211)
+++ export PLUGIN_DATASTORE_PORTS
+++ export PLUGIN_DEFAULT_ALIAS=MEMCACHED
+++ PLUGIN_DEFAULT_ALIAS=MEMCACHED
+++ export PLUGIN_ALT_ALIAS=DOKKU_MEMCACHED
+++ PLUGIN_ALT_ALIAS=DOKKU_MEMCACHED
+++ export PLUGIN_IMAGE=memcached
+++ PLUGIN_IMAGE=memcached
+++ export PLUGIN_IMAGE_VERSION=1.4.25
+++ PLUGIN_IMAGE_VERSION=1.4.25
+++ export PLUGIN_SCHEME=memcached
+++ PLUGIN_SCHEME=memcached
+++ export PLUGIN_SERVICE=Memcached
+++ PLUGIN_SERVICE=Memcached
++ dirname /var/lib/dokku/plugins/enabled/memcached/commands
+ source /var/lib/dokku/plugins/enabled/memcached/config
++ export MEMCACHED_IMAGE=memcached
++ MEMCACHED_IMAGE=memcached
++ export MEMCACHED_IMAGE_VERSION=1.4.25
++ MEMCACHED_IMAGE_VERSION=1.4.25
++ export MEMCACHED_ROOT=/var/lib/dokku/services/memcached
++ MEMCACHED_ROOT=/var/lib/dokku/services/memcached
++ export PLUGIN_COMMAND_PREFIX=memcached
++ PLUGIN_COMMAND_PREFIX=memcached
++ export PLUGIN_DATA_ROOT=/var/lib/dokku/services/memcached
++ PLUGIN_DATA_ROOT=/var/lib/dokku/services/memcached
++ PLUGIN_DATASTORE_PORTS=(11211)
++ export PLUGIN_DATASTORE_PORTS
++ export PLUGIN_DEFAULT_ALIAS=MEMCACHED
++ PLUGIN_DEFAULT_ALIAS=MEMCACHED
++ export PLUGIN_ALT_ALIAS=DOKKU_MEMCACHED
++ PLUGIN_ALT_ALIAS=DOKKU_MEMCACHED
++ export PLUGIN_IMAGE=memcached
++ PLUGIN_IMAGE=memcached
++ export PLUGIN_IMAGE_VERSION=1.4.25
++ PLUGIN_IMAGE_VERSION=1.4.25
++ export PLUGIN_SCHEME=memcached
++ PLUGIN_SCHEME=memcached
++ export PLUGIN_SERVICE=Memcached
++ PLUGIN_SERVICE=Memcached
+ [[ git-receive-pack == memcached:* ]]
+ [[ -d /var/lib/dokku/services/memcached/* ]]
+ case "$1" in
+ exit 10
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/nginx-vhosts/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/plugin/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/postgres/commands git-receive-pack ''\''myapp'\'''
+ PLUGIN_BASE_PATH=/var/lib/dokku/plugins
+ [[ -n 1 ]]
+ PLUGIN_BASE_PATH=/var/lib/dokku/plugins/enabled
+ source /var/lib/dokku/plugins/enabled/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ dirname /var/lib/dokku/plugins/enabled/postgres/commands
+ source /var/lib/dokku/plugins/enabled/postgres/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+++ dirname /var/lib/dokku/plugins/enabled/postgres/commands
++ source /var/lib/dokku/plugins/enabled/postgres/config
+++ export POSTGRES_IMAGE=postgres
+++ POSTGRES_IMAGE=postgres
+++ export POSTGRES_IMAGE_VERSION=9.5.0
+++ POSTGRES_IMAGE_VERSION=9.5.0
+++ export POSTGRES_ROOT=/var/lib/dokku/services/postgres
+++ POSTGRES_ROOT=/var/lib/dokku/services/postgres
+++ export PLUGIN_COMMAND_PREFIX=postgres
+++ PLUGIN_COMMAND_PREFIX=postgres
+++ export PLUGIN_DATA_ROOT=/var/lib/dokku/services/postgres
+++ PLUGIN_DATA_ROOT=/var/lib/dokku/services/postgres
+++ PLUGIN_DATASTORE_PORTS=(5432)
+++ export PLUGIN_DATASTORE_PORTS
+++ export PLUGIN_DEFAULT_ALIAS=DATABASE
+++ PLUGIN_DEFAULT_ALIAS=DATABASE
+++ export PLUGIN_ALT_ALIAS=DOKKU_POSTGRES
+++ PLUGIN_ALT_ALIAS=DOKKU_POSTGRES
+++ export PLUGIN_IMAGE=postgres
+++ PLUGIN_IMAGE=postgres
+++ export PLUGIN_IMAGE_VERSION=9.5.0
+++ PLUGIN_IMAGE_VERSION=9.5.0
+++ export PLUGIN_SCHEME=postgres
+++ PLUGIN_SCHEME=postgres
+++ export PLUGIN_SERVICE=Postgres
+++ PLUGIN_SERVICE=Postgres
++ dirname /var/lib/dokku/plugins/enabled/postgres/commands
+ source /var/lib/dokku/plugins/enabled/postgres/config
++ export POSTGRES_IMAGE=postgres
++ POSTGRES_IMAGE=postgres
++ export POSTGRES_IMAGE_VERSION=9.5.0
++ POSTGRES_IMAGE_VERSION=9.5.0
++ export POSTGRES_ROOT=/var/lib/dokku/services/postgres
++ POSTGRES_ROOT=/var/lib/dokku/services/postgres
++ export PLUGIN_COMMAND_PREFIX=postgres
++ PLUGIN_COMMAND_PREFIX=postgres
++ export PLUGIN_DATA_ROOT=/var/lib/dokku/services/postgres
++ PLUGIN_DATA_ROOT=/var/lib/dokku/services/postgres
++ PLUGIN_DATASTORE_PORTS=(5432)
++ export PLUGIN_DATASTORE_PORTS
++ export PLUGIN_DEFAULT_ALIAS=DATABASE
++ PLUGIN_DEFAULT_ALIAS=DATABASE
++ export PLUGIN_ALT_ALIAS=DOKKU_POSTGRES
++ PLUGIN_ALT_ALIAS=DOKKU_POSTGRES
++ export PLUGIN_IMAGE=postgres
++ PLUGIN_IMAGE=postgres
++ export PLUGIN_IMAGE_VERSION=9.5.0
++ PLUGIN_IMAGE_VERSION=9.5.0
++ export PLUGIN_SCHEME=postgres
++ PLUGIN_SCHEME=postgres
++ export PLUGIN_SERVICE=Postgres
++ PLUGIN_SERVICE=Postgres
+ [[ git-receive-pack == postgres:* ]]
+ [[ -d /var/lib/dokku/services/postgres/* ]]
+ case "$1" in
+ exit 10
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/proxy/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/ps/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/shell/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/storage/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/tags/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/tar/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ [[ 1 -eq 0 ]]
+ set +f
+ exit 0
```
And this when rebuilding:
```
+ case "$(lsb_release -si)" in
++ lsb_release -si
+ export DOKKU_DISTRO=ubuntu
+ DOKKU_DISTRO=ubuntu
+ export DOKKU_IMAGE=gliderlabs/herokuish
+ DOKKU_IMAGE=gliderlabs/herokuish
+ export DOKKU_LIB_ROOT=/var/lib/dokku
+ DOKKU_LIB_ROOT=/var/lib/dokku
+ export PLUGIN_PATH=/var/lib/dokku/plugins
+ PLUGIN_PATH=/var/lib/dokku/plugins
+ export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ export DOKKU_API_VERSION=1
+ DOKKU_API_VERSION=1
+ export DOKKU_NOT_IMPLEMENTED_EXIT=10
+ DOKKU_NOT_IMPLEMENTED_EXIT=10
+ export DOKKU_VALID_EXIT=0
+ DOKKU_VALID_EXIT=0
+ export DOKKU_LOGS_DIR=/var/log/dokku
+ DOKKU_LOGS_DIR=/var/log/dokku
+ export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ export DOKKU_CONTAINER_LABEL=dokku
+ DOKKU_CONTAINER_LABEL=dokku
+ export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ parse_args ps:rebuild myapp
+ declare 'desc=top-level cli arg parser'
+ local next_index=1
+ local skip=false
+ args=("$@")
+ local args
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=2
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=3
+ return 0
+ args=("$@")
+ [[ ps:rebuild =~ ^--.* ]]
+ has_tty
+ declare 'desc=return 0 if we have a tty'
++ /usr/bin/tty
+ [[ /dev/pts/0 == \n\o\t\ \a\ \t\t\y ]]
+ return 0
++ id -un
+ [[ root != \d\o\k\k\u ]]
+ [[ ! ps:rebuild =~ plugin:* ]]
++ id -un
+ export SSH_USER=root
+ SSH_USER=root
+ sudo -u dokku -E -H /usr/local/bin/dokku ps:rebuild myapp
+ case "$(lsb_release -si)" in
++ lsb_release -si
+ export DOKKU_DISTRO=ubuntu
+ DOKKU_DISTRO=ubuntu
+ export DOKKU_IMAGE=gliderlabs/herokuish
+ DOKKU_IMAGE=gliderlabs/herokuish
+ export DOKKU_LIB_ROOT=/var/lib/dokku
+ DOKKU_LIB_ROOT=/var/lib/dokku
+ export PLUGIN_PATH=/var/lib/dokku/plugins
+ PLUGIN_PATH=/var/lib/dokku/plugins
+ export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ export DOKKU_API_VERSION=1
+ DOKKU_API_VERSION=1
+ export DOKKU_NOT_IMPLEMENTED_EXIT=10
+ DOKKU_NOT_IMPLEMENTED_EXIT=10
+ export DOKKU_VALID_EXIT=0
+ DOKKU_VALID_EXIT=0
+ export DOKKU_LOGS_DIR=/var/log/dokku
+ DOKKU_LOGS_DIR=/var/log/dokku
+ export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ export DOKKU_CONTAINER_LABEL=dokku
+ DOKKU_CONTAINER_LABEL=dokku
+ export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ parse_args ps:rebuild myapp
+ declare 'desc=top-level cli arg parser'
+ local next_index=1
+ local skip=false
+ args=("$@")
+ local args
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=2
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=3
+ return 0
+ args=("$@")
+ [[ ps:rebuild =~ ^--.* ]]
+ has_tty
+ declare 'desc=return 0 if we have a tty'
++ /usr/bin/tty
+ [[ /dev/pts/0 == \n\o\t\ \a\ \t\t\y ]]
+ return 0
++ id -un
+ [[ dokku != \d\o\k\k\u ]]
++ id -un
+ [[ dokku != \r\o\o\t ]]
+ [[ ps:rebuild =~ ^plugin:.* ]]
+ [[ -n '' ]]
+ dokku_auth ps:rebuild myapp
+ declare 'desc=calls user-auth plugin trigger'
+ export SSH_USER=root
+ SSH_USER=root
+ export SSH_NAME=default
+ SSH_NAME=default
+ plugn trigger user-auth root default ps:rebuild myapp
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ [[ ! -n '' ]]
+ return 0
+ case "$1" in
+ execute_dokku_cmd ps:rebuild myapp
+ declare 'desc=executes dokku sub-commands'
+ local PLUGIN_NAME=ps:rebuild
+ local PLUGIN_CMD=ps:rebuild
+ local implemented=0
+ local script
+ argv=("$@")
+ local argv
+ case "$PLUGIN_NAME" in
++ readlink -f /var/lib/dokku/plugins/enabled/ps
+ [[ /var/lib/dokku/core-plugins/available/ps == *core-plugins* ]]
+ [[ ps:rebuild == \p\s\:\r\e\b\u\i\l\d ]]
+ shift 1
+ [[ ! -z '' ]]
+ set -- ps:rebuild myapp
+ [[ -x /var/lib/dokku/plugins/enabled/ps:rebuild/subcommands/default ]]
+ [[ -x /var/lib/dokku/plugins/enabled/ps:rebuild/subcommands/ps:rebuild ]]
+ [[ -x /var/lib/dokku/plugins/enabled/ps/subcommands/rebuild ]]
+ /var/lib/dokku/plugins/enabled/ps/subcommands/rebuild ps:rebuild myapp
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ source /var/lib/dokku/plugins/available/ps/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ source /var/lib/dokku/core-plugins/available/common/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
++ source /var/lib/dokku/plugins/available/config/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
+++ source /var/lib/dokku/core-plugins/available/common/functions
++++ set -eo pipefail
++++ [[ -n 1 ]]
++++ set -x
+ ps_rebuild_cmd ps:rebuild myapp
+ declare 'desc=rebuilds app via command line'
+ local cmd=ps:rebuild
+ [[ -z myapp ]]
+ ps_rebuild myapp
+ declare 'desc=rebuilds app from base image'
+ local APP=myapp
+ verify_app_name myapp
+ declare 'desc=verify app name format and app existence'
+ local APP=myapp
+ [[ ! -n myapp ]]
+ [[ ! myapp =~ ^[a-z].* ]]
+ [[ ! -d /home/dokku/myapp ]]
+ return 0
+ plugn trigger receive-app myapp
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ [[ ! -n '' ]]
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ git_receive_app myapp
+ declare 'desc=git receive-app plugin trigger'
+ local trigger=git_receive_app
+ local APP=myapp
+ local REV=
+ [[ ! -d /home/dokku/myapp/refs ]]
+ dokku git-build myapp
+ case "$(lsb_release -si)" in
++ lsb_release -si
+ export DOKKU_DISTRO=ubuntu
+ DOKKU_DISTRO=ubuntu
+ export DOKKU_IMAGE=gliderlabs/herokuish
+ DOKKU_IMAGE=gliderlabs/herokuish
+ export DOKKU_LIB_ROOT=/var/lib/dokku
+ DOKKU_LIB_ROOT=/var/lib/dokku
+ export PLUGIN_PATH=/var/lib/dokku/plugins
+ PLUGIN_PATH=/var/lib/dokku/plugins
+ export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ export DOKKU_API_VERSION=1
+ DOKKU_API_VERSION=1
+ export DOKKU_NOT_IMPLEMENTED_EXIT=10
+ DOKKU_NOT_IMPLEMENTED_EXIT=10
+ export DOKKU_VALID_EXIT=0
+ DOKKU_VALID_EXIT=0
+ export DOKKU_LOGS_DIR=/var/log/dokku
+ DOKKU_LOGS_DIR=/var/log/dokku
+ export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ export DOKKU_CONTAINER_LABEL=dokku
+ DOKKU_CONTAINER_LABEL=dokku
+ export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ parse_args git-build myapp
+ declare 'desc=top-level cli arg parser'
+ local next_index=1
+ local skip=false
+ args=("$@")
+ local args
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=2
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=3
+ return 0
+ args=("$@")
+ [[ git-build =~ ^--.* ]]
+ has_tty
+ declare 'desc=return 0 if we have a tty'
++ /usr/bin/tty
+ [[ /dev/pts/0 == \n\o\t\ \a\ \t\t\y ]]
+ return 0
++ id -un
+ [[ dokku != \d\o\k\k\u ]]
++ id -un
+ [[ dokku != \r\o\o\t ]]
+ [[ git-build =~ ^plugin:.* ]]
+ [[ -n '' ]]
+ dokku_auth git-build myapp
+ declare 'desc=calls user-auth plugin trigger'
+ export SSH_USER=root
+ SSH_USER=root
+ export SSH_NAME=default
+ SSH_NAME=default
+ plugn trigger user-auth root default git-build myapp
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ [[ ! -n '' ]]
+ return 0
+ case "$1" in
+ execute_dokku_cmd git-build myapp
+ declare 'desc=executes dokku sub-commands'
+ local PLUGIN_NAME=git-build
+ local PLUGIN_CMD=git-build
+ local implemented=0
+ local script
+ argv=("$@")
+ local argv
+ case "$PLUGIN_NAME" in
++ readlink -f /var/lib/dokku/plugins/enabled/git-build
+ [[ /var/lib/dokku/plugins/enabled/git-build == *core-plugins* ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-build/subcommands/default ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-build/subcommands/git-build ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-build/subcommands/git-build ]]
+ [[ 0 -eq 0 ]]
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/00_dokku-standard/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/20_events/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/apps/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/certs/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/checks/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/config/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/docker-options/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/domains/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/enter/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/git/commands git-build myapp
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ source /var/lib/dokku/plugins/available/apps/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ source /var/lib/dokku/core-plugins/available/common/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
++ source /var/lib/dokku/plugins/available/config/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
+++ source /var/lib/dokku/core-plugins/available/common/functions
++++ set -eo pipefail
++++ [[ -n 1 ]]
++++ set -x
+ source /var/lib/dokku/plugins/available/config/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ source /var/lib/dokku/core-plugins/available/common/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
+ case "$1" in
+ git_build_cmd git-build myapp
+ declare 'desc=lock git-build'
+ local cmd=git-build
+ local APP=myapp
+ local APP_BUILD_LOCK=/home/dokku/myapp/.build.lock
+ local 'APP_BUILD_LOCK_MSG=myapp is currently being deployed or locked. Waiting...'
++ flock -n /home/dokku/myapp/.build.lock true
++ echo 0
+ [[ 0 -ne 0 ]]
+ shift 1
+ flock -o /home/dokku/myapp/.build.lock dokku git-build-locked myapp
+ case "$(lsb_release -si)" in
++ lsb_release -si
+ export DOKKU_DISTRO=ubuntu
+ DOKKU_DISTRO=ubuntu
+ export DOKKU_IMAGE=gliderlabs/herokuish
+ DOKKU_IMAGE=gliderlabs/herokuish
+ export DOKKU_LIB_ROOT=/var/lib/dokku
+ DOKKU_LIB_ROOT=/var/lib/dokku
+ export PLUGIN_PATH=/var/lib/dokku/plugins
+ PLUGIN_PATH=/var/lib/dokku/plugins
+ export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ export DOKKU_API_VERSION=1
+ DOKKU_API_VERSION=1
+ export DOKKU_NOT_IMPLEMENTED_EXIT=10
+ DOKKU_NOT_IMPLEMENTED_EXIT=10
+ export DOKKU_VALID_EXIT=0
+ DOKKU_VALID_EXIT=0
+ export DOKKU_LOGS_DIR=/var/log/dokku
+ DOKKU_LOGS_DIR=/var/log/dokku
+ export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ export DOKKU_CONTAINER_LABEL=dokku
+ DOKKU_CONTAINER_LABEL=dokku
+ export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ parse_args git-build-locked myapp
+ declare 'desc=top-level cli arg parser'
+ local next_index=1
+ local skip=false
+ args=("$@")
+ local args
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=2
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=3
+ return 0
+ args=("$@")
+ [[ git-build-locked =~ ^--.* ]]
+ has_tty
+ declare 'desc=return 0 if we have a tty'
++ /usr/bin/tty
+ [[ /dev/pts/0 == \n\o\t\ \a\ \t\t\y ]]
+ return 0
++ id -un
+ [[ dokku != \d\o\k\k\u ]]
++ id -un
+ [[ dokku != \r\o\o\t ]]
+ [[ git-build-locked =~ ^plugin:.* ]]
+ [[ -n '' ]]
+ dokku_auth git-build-locked myapp
+ declare 'desc=calls user-auth plugin trigger'
+ export SSH_USER=root
+ SSH_USER=root
+ export SSH_NAME=default
+ SSH_NAME=default
+ plugn trigger user-auth root default git-build-locked myapp
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ [[ ! -n '' ]]
+ return 0
+ case "$1" in
+ execute_dokku_cmd git-build-locked myapp
+ declare 'desc=executes dokku sub-commands'
+ local PLUGIN_NAME=git-build-locked
+ local PLUGIN_CMD=git-build-locked
+ local implemented=0
+ local script
+ argv=("$@")
+ local argv
+ case "$PLUGIN_NAME" in
++ readlink -f /var/lib/dokku/plugins/enabled/git-build-locked
+ [[ /var/lib/dokku/plugins/enabled/git-build-locked == *core-plugins* ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-build-locked/subcommands/default ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-build-locked/subcommands/git-build-locked ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-build-locked/subcommands/git-build-locked ]]
+ [[ 0 -eq 0 ]]
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/00_dokku-standard/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/20_events/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/apps/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/certs/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/checks/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/config/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/docker-options/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/domains/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/enter/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
```
Can you remove the redirect to stderr on [this line](https://github.com/dokku/dokku/blob/a9a9d0a898c23a1df9252b6ae1f8fecc2ff1be4e/plugins/git/functions#L57) and try pushing again?
Ok, I get this:
```
remote: + git fetch --depth=1 origin refs/tags/dokku/b4287d41e948d5bd423edd1d5528d86784678d8c
remote: fatal: write error: No space left on device
```
But I have a different version of that git file you linked to, it is not called `functions` but `commands` (`dokku/plugins/enabled/git/commands`).
Okay, so even though I cleaned up there is no space. Hm. Any idea how I can make more space? Most of the space is taken in /var/lib/docker/aufs/diff.
It is a 30 GB server and there is only one app. :)
Here is the disk space info if that helps:
```
Filesystem                 Size  Used Avail Use% Mounted on
udev                       487M  4.0K  487M   1% /dev
tmpfs                      100M  552K   99M   1% /run
/dev/disk/by-label/DOROOT   30G   27G  1.4G  96% /
none                       4.0K     0  4.0K   0% /sys/fs/cgroup
none                       5.0M     0  5.0M   0% /run/lock
none                       497M  2.8M  495M   1% /run/shm
none                       100M     0  100M   0% /run/user
overflow                   1.0M     0  1.0M   0% /tmp
none                        30G   27G  1.4G  96% /var/lib/docker/aufs/mnt/6cf7122027c0fc8f30350803dc83d016398b69bf17e5b69349b8559718d787f6
shm                         64M     0   64M   0% /var/lib/docker/containers/5d0b9c824ea6a7b05c73293f5ac92e8bc6060cd9b69d48aef2a90555a6124a4c/shm
none                        30G   27G  1.4G  96% /var/lib/docker/aufs/mnt/96c252a39a50a42b7f30b641c4c35bf3fe4fc339990a7127d7a7f6a83aad4e75
shm                         64M     0   64M   0% /var/lib/docker/containers/f149b478793abf788bb3c13b89509965e678083db541cc7a762059dcbcdb2737/shm
none                        30G   27G  1.4G  96% /var/lib/docker/aufs/mnt/6a54418da4a675d48752d2bfc679ea3d72b30fd744732450d3208931edfea689
shm                         64M  4.0K   64M   1% /var/lib/docker/containers/0e442500e4008b22e8247296a9a79e67fe4aeac3526dc265fa1fa633e77ef77b/shm
none                        30G   27G  1.4G  96% /var/lib/docker/aufs/mnt/947a39aeaadd4774c45360bb6bd792ca94cbae4570e29147fad213e188137e70
shm                         64M     0   64M   0% /var/lib/docker/containers/64af2fc33af08a1e7907d5ff2f419a5187c8c170eace4f6a08a65ac6484d3e53/shm
none                        30G   27G  1.4G  96% /var/lib/docker/aufs/mnt/163a716c9ff437e0dc48aea195b5907cb78e820aa4838a65efb7c8849a66e361
shm                         64M     0   64M   0% /var/lib/docker/containers/3cb07b2cd8962f3c9e27d032e3bdd033bcfb4e723f943a5cc2193907ba662919/shm
```
I think my issue is this docker issue: https://github.com/moby/moby/issues/22207
Okay, I ran one of the scripts suggested in that issue, which broke my build but freed up space, so after a rebuild (which worked again) the app started up. I was also able to push updates again. However the server disk is still very full (90%). Wonder how to properly solve this. Seems there are still over 20 GB of unused files.
My guess
is a restart will clear out any files that are still there lazily -
like log files. Once you restart, just `ps:rebuild` your app and call it a day?
I did a rebuild; unfortunately /var/lib/docker/aufs/diff is still huge. I need to solve this: the server is almost full again. But I'm not sure if it has to do with dokku at all or if it is a docker issue. At least I updated docker, dokku
and nginx now. There is a script that seems to fix this temporarily:
https://gist.github.com/Karreg/84206b9711cbc6d0fbbe77a57f705979 And it
is also suggested
that the latest docker version doesn't have the issue anymore. However,
that script will delete all volumes, containers and images. What would I need to do to get my dokku app running again after I run it?
In theory we persist all data to disk, so unless you are using random volumes, you *should* be fine to run it and then `ps:rebuild` your application. The `ps:rebuild` will pull down a new herokuish - or any other stuff you need for the build.
You could also try upgrading docker, which maybe will fix it automatically according to comments in the issue.
Note: this isn't a Dokku issue, but a docker one. We can't do anything about diff files lying around, so unless that's the new normal - great! - I'm going to leave it as is.
Ran the script but now the memcached container is not found when doing the rebuild. Do I have to set it up separately?
Can you hop on the slack #dokku for support?
I have slack with our own workspace, how can I get into #dokku?
```
dokku memcached:list
Error: No such object: dokku.memcached.name
Error: No such object: 5d0b9c824ea6a7b05c73293f5ac92e8bc6060cd9b69d48aef2a90555a6124a4c
Error: No such object: 5d0b9c824ea6a7b05c73293f5ac92e8bc6060cd9b69d48aef2a90555a6124a4c
Error: No such object: 5d0b9c824ea6a7b05c73293f5ac92e8bc6060cd9b69d48aef2a90555a6124a4c
Error: No such object: 5d0b9c824ea6a7b05c73293f5ac92e8bc6060cd9b69d48aef2a90555a6124a4c
Error: No such object: 5d0b9c824ea6a7b05c73293f5ac92e8bc6060cd9b69d48aef2a90555a6124a4c
NAME VERSION STATUS EXPOSED PORTS LINKS
name stopped - app-name
```
Same with postgres.
http://dokku.viewdocs.io/dokku/getting-started/where-to-get-help/
do i need to create
new services? hope i didnt lose my db?
Please jump on our slack chat to get support, github issues isn't the
best method for discussing your particular issue. I'll update the ticket
once we've gone through everything.
Using postgres as an example, if you ever have to recover from suddenly
deleting/losing all your containers/images/volumes, fear not! Dokku
stores the data on disk, so as long as you aren't mucking about in your
`/var/lib/dokku` directory, the following will get your services running
again:
```shell
# note what version of the service is running (in this case, 9.5.9 for postgres)
# update the plugin so we get any necessary fixes!
dokku plugin:update postgres
# stop the running version
dokku postgres:stop SERVICE_NAME
# export the correct image version or you'll be auto-updated to the latest, which might result in an error depending on your database
export POSTGRES_IMAGE_VERSION=9.5.9
# start the service
dokku postgres:start SERVICE_NAME
```
Once your services are running, a `ps:rebuild` will properly rebuild
your applications.
Sorry to comment on a closed thread. I had the exact same problem as @ncri. _git push_ was still failing and complaining about no disk space remaining even though there was plenty once I deleted unused containers. Restarting the docker service didn't help.
Simply rebooting my server (sudo reboot) fixed the issue for me.
`dokku cleanup` command does not work for me either, no idea why. I have to run the docker commands every time I run out of space
@gabrielhpugliese Without knowing more about your setup, debugging this is going to be hard. Please file a new issue and include all the information we ask for in the issue template.

React version: 17.0.0-rc.2
## Steps To Reproduce
1. Install in dev mode
2. All events (click, onBlur, all of them) are very slow; every time you click on an input, the browser freezes. And it gets even slower if you open the Chrome dev tools or similar.

It doesn't look like this bug report has enough info for one of us to reproduce it.
Please provide a CodeSandbox (https://codesandbox.io/s/new), a link to a repository on GitHub, or provide a minimal code example that reproduces the problem. Screenshots or videos can also be helpful if they help provide context on how to repro the bug.
Here are some tips for providing a minimal example: https://stackoverflow.com/help/mcve
cc @trueadm
@joacub Are there any DEV warnings in the console during this period of slowness?
> @joacub Are there any DEV warnings in the console during this period of slowness?
No, there are no warnings. I ran a JS profile and React calls the same function several times over and over; I will share this soon
this is related
https://github.com/facebook/react/issues/19958
here is the profiler:
<img width="1250" alt="Screen Shot 2020-10-05 at 2 07 28 PM" src="https://user-images.githubusercontent.com/2091228/95115875-2a93c600-0714-11eb-8bb8-1dcbb2eadf3e.png"> <img width="1257" alt="Screen Shot 2020-10-05 at 2 06 53 PM" src="https://user-images.githubusercontent.com/2091228/95115884-2d8eb680-0714-11eb-812a-932271847e0d.png">
and there are more and more calls to the same function, and the other function, discreteUpdates, also takes long:
<img width="1255" alt="Screen Shot 2020-10-05 at 2 09 07 PM" src="https://user-images.githubusercontent.com/2091228/95116009-5e6eeb80-0714-11eb-902c-ce62bb40f4d2.png">
That is a recursive function that React calls once per component with mutation effects (e.g. add `<div>` to the DOM).
> this is related
> #19958
Possibly, though hard to say with confidence yet.
> That is a recursive function that React calls once per component with mutation effects (e.g. add `<div>` to the DOM).
>
> > this is related
> > #19958
>
> Possibly, though hard to say with confidence yet.
like 1000 times? what do you mean by mutation effects? this -> **document.appendChild** or what?
this works perfectly in production and in safari, and when the dev tools are closed it works better but is still slow.
and I'm sure I'm not using mutation effects like **document.appendChild**, if you are talking about that.
@joacub Brian is saying it is absolutely *expected* to see that function
in the stack trace a lot — basically, once per `<div>` or a component
that you're mounting. What would not be expected is seeing it more times
than the nesting of your tree. It's hard to say anything from your
screenshots — it would really help if you could either isolate the
problematic behavior in an example or share your project or deploy it
somewhere.
> @joacub Brian is saying it is absolutely _expected_ to see that function in the stack trace a lot — basically, once per `<div>` or a component that you're mounting. What would not be expected is seeing it more times than the nesting of your tree. It's hard to say anything from your screenshots — it would really help if you could either isolate the problematic behavior in an example or share your project or deploy it somewhere.
project is private so I can't share, but I understand what he said. I'm not remounting the whole tree; it's just one button, nothing else. That happens on all click events. All of my components use React.memo with () => true always, so they never have to be remounted or re-rendered. What's more, that click is on one separate element which is outside of my tree's child components.
before react 17 this was working pretty good; now it is unstable 100%, I can't even work
A little more context: Even if only one component in your tree has a
mutation effect, React will recurse from the root down to that
component.
```
A
B C
D E
F G
```
Let's say G has a mutation effect, React would still recurse from A -> B -> D -> G to apply the effect. The deeper your tree is, the taller the call stack would be.
> before react 17 this was working pretty good now is unstable 100%, I can't even work
I don't know what this means.
It's very difficult for us to guess about what might be causing this
issue without a repro. If you can't share your private app with us, can you make a reduced case that shows the problem on Code Sandbox?
> project is private so I can't share
Is it deployed somewhere in production that we could access? E.g. maybe it's a live website.
> > project is private so I can't share
>
> Is it deployed somewhere in production that we could access? E.g. maybe it's a live website.
yes but production works perfect, just in dev mode is super slow.
you can access here bringerparcel.dev
> A little more context: Even if only one component in your tree has a mutation effect, React will recurse from the root down to that component.
>
> ```
> A
> B C
> D E
> F G
> ```
>
> Let's say G has a mutation effect, React would still recurse from A -> B -> D -> G to apply the effect. The deeper your tree is, the taller the call stack would be.
>
> > before react 17 this was working pretty good now is unstable 100%, I can't even work
>
> I don't know what this means.
>
> It's very difficult for us to guess about what might be causing this issue without a repro. If you can't share your private app with us, can you make a reduced case that shows the problem on Code Sandbox?
what I mean with this

> before react 17 this was working pretty good now is unstable 100%, I can't even work

is... in react 16.3.1 this issue does not exist
> yes but production works perfect, just in dev mode is super slow.
>
> you can access here bringerparcel.dev

That is a production build, so it doesn't really help us reproduce the problem.
> A little more context: Even if only one component in your tree has a mutation effect, React will recurse from the root down to that component.
>
> ```
> A
> B C
> D E
> F G
> ```
>
> Let's say G has a mutation effect, React would still recurse from A -> B -> D -> G to apply the effect. The deeper your tree is, the taller the call stack would be.
>
> > before react 17 this was working pretty good now is unstable 100%, I can't even work
>
> I don't know what this means.
>
> It's very difficult for us to guess about what might be causing this issue without a repro. If you can't share your private app with us, can you make a reduced case that shows the problem on Code Sandbox?
i understand what react is doing, but is it normal for this to take almost 3 seconds per click? that's not normal.
I don't have enough time to reproduce this in Code Sandbox, but as I said, the error is very simple: every click event takes forever
> @joacub Brian is saying it is absolutely _expected_ to see that function in the stack trace a lot — basically, once per `<div>` or a component that you're mounting. What would not be expected is seeing it more times than the nesting of your tree. It's hard to say anything from your screenshots — it would really help if you could either isolate the problematic behavior in an example or share your project or deploy it somewhere.
I understand, but even if I'm using React.memo in my components? I guess yes, but I'm sure I don't have that deep a tree at all; that's insane. I can't even reach the last call in the tree, it's endless.
> i understand what react is doing but this is normal to take like almost 3 seconds per click ? that's not normal.
This discussion isn't really productive without any way for us to repro
the problem. We're kind of just going around in circles.
We have been using what will be version 17 of React at Facebook for several weeks (it changes a little each week) and we have not seen the behavior you're describing. So I would agree that it's not normal.
> > i understand what react is doing but this is normal to take like almost 3 seconds per click ? that's not normal.
>
> This discussion isn't really productive without any way for us to repro the problem. We're kind of just going around in circles.
>
> We have been using what will be version 17 of React at Facebook for several weeks (it changes a little each week) and we have not seen the behavior you're describing. So I would agree that it's not normal.
I will try to get some time to reproduce this in CodeSandbox
i have no time to share in Code Sandbox, but I tested in my environment: I removed all the components in the tree and left just one, and the behavior is the same, super slow on every onClick event, always taking the same time. This is not related to mutation effects in the tree; it is related to the event handlers or something. As I said, and to be clear, I left just some components and the behavior is the same endless recursive calls.
i did more tests in my project, removing more and more dev tools; it is just react that has the issue, there is something related to state changes or event handlers. sorry if I have not been much help, but I don't have much time for it. I appreciate your concern and work.
Since you’re saying this is DEV-only, this information so far is not helpful without a project we can run locally.
I understand you can’t extract it to a sandbox, but maybe you could put a reduced version on GH?
@gaearon i've sent you an email to schedule a moment to test it with ngrok to give you access to localhost
@gaearon if you are able to do the test via ngrok we can do now.
Awesome, thanks. I'm not available now but I'll respond later.
---
I think I actually have a guess about what's going on. I think we're calling `invokeGuardedCallback` a lot more times than before.
Looking at RC1, we had code like this:
```js
do {
  {
    invokeGuardedCallback(null, commitLayoutEffects, null, root, lanes);
    if (hasCaughtError()) {
      if (!(nextEffect !== null)) {
        {
          throw Error( "Should be working on an effect." );
        }
      }
      var _error2 = clearCaughtError();
      captureCommitPhaseError(nextEffect, nextEffect.return, _error2);
      nextEffect = nextEffect.nextEffect;
    }
  }
} while (nextEffect !== null);
```
but the actual `commitLayoutEffects` implementation had its own loop, avoiding repeated `invokeGuardedCallback` calls:
```js
function commitLayoutEffects(root, committedLanes) {
  while (nextEffect !== null) {
    setCurrentFiber(nextEffect);
    var effectTag = nextEffect.effectTag;
    if (effectTag & (Update | Callback)) {
      var current = nextEffect.alternate;
      commitLifeCycles(root, current, nextEffect);
    }
    {
      if (effectTag & Ref) {
        commitAttachRef(nextEffect);
      }
    }
    resetCurrentFiber();
    nextEffect = nextEffect.nextEffect;
  }
}
```
Whereas in RC2, we recursively call `commitLayoutEffects` which wraps *every* effect into its own `invokeGuardedCallback`:
```js
function commitLayoutEffects(firstChild, root, committedLanes) {
  var fiber = firstChild;
  while (fiber !== null) {
    if (fiber.child !== null) {
      var primarySubtreeFlags = fiber.subtreeFlags & LayoutMask;
      if (primarySubtreeFlags !== NoFlags) {
        commitLayoutEffects(fiber.child, root, committedLanes);
      }
    }
    {
      setCurrentFiber(fiber);
      invokeGuardedCallback(null, commitLayoutEffectsImpl, null, fiber, root, committedLanes);
      if (hasCaughtError()) {
        var error = clearCaughtError();
        captureCommitPhaseError(fiber, fiber.return, error);
      }
      resetCurrentFiber();
    }
    fiber = fiber.sibling;
  }
}
```
I can easily imagine this adding a lot of DEV overhead since it happens for every Fiber including host ones.
On WWW, `invokeGuardedCallback` is forked to go through a fancy try-catch, which is why we did not see it.
Seems like an easy fix. I'll look tomorrow.
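A minimal sketch of why that difference matters (hypothetical stand-in code, not React's real `invokeGuardedCallback`, which does much more DEV-only work per invocation): guarding each fiber individually multiplies the per-call overhead by the number of fibers, versus paying it once for the whole loop.

```javascript
let guardedCalls = 0;

// Stand-in for the DEV guarded-callback wrapper; the real one is
// far more expensive per invocation in development builds.
function invokeGuardedCallback(fn) {
  guardedCalls++;
  try {
    fn();
  } catch (e) {
    // errors would be captured for error boundaries here
  }
}

const fibers = Array.from({ length: 1000 }, (_, i) => i);

// RC1 shape: one guarded call wrapping a loop over all effects.
guardedCalls = 0;
invokeGuardedCallback(() => {
  for (const fiber of fibers) {
    // commit each effect inside the single guard
  }
});
console.log(guardedCalls); // 1

// RC2 shape: one guarded call per fiber.
guardedCalls = 0;
for (const fiber of fibers) {
  invokeGuardedCallback(() => {
    // commit this fiber's effect inside its own guard
  });
}
console.log(guardedCalls); // 1000
```

With 1000 fibers the guard fires 1000 times instead of once, which is the kind of per-fiber DEV overhead described above.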
@gaearon legend! is that
@gaearon 🥳🥳
> @gaearon legend! is that
can you please get in touch with me ? contact me at johan@webmediaprojects.com
Seems like we (React team) need to verify if the guarded callback is the source of slowness. We should be able to do that by disabling our internal "fancy try-catch" :smile: We'll keep this thread posted.
> Seems like we (React team) need to verify if the guarded callback is the source of slowness. We should be able to do that by disabling our internal "fancy try-catch" 😄 We'll keep this thread posted.
do you have any ideas, or can we change something really quick to at least mitigate this for now? _I did try some things, but not successfully._
To "mitigate" it? I believe this is only happening in DEV mode and only if you're using an RC, so if it's blocking your local development, don't use the RC until we've released another one.
@bvaughn well i have the same issue; i had to go back to 16.13.1 and it works perfectly. it is only since 17
@bvaughn in my case I can not go back, there are too many changes in my app, in more than 400 files, which would cause me delays. So, if you can suggest something to hot-fix this in the meantime while I wait for the next RC, I would appreciate it.
Thanks
@joacub try to change to 16.13.1 in your package.json; RC versions are not supposed to be used in production. that one works perfectly, i tested it
> @joacub try to change to 16.13.1 in your package.json, RC versions supose to not be used in production, that one works perfectly i tested it
I know, there is no option for me to go back now, that would be a waste of time for me. 😢
I don't understand how you have 400 files changed in a way that wouldn't work with v16. v17 doesn't add new APIs or behaviors. Why can't you just pin to a different version of React?
> I don't understand how you have 400 files changed in a way that wouldn't work with v16. v17 doesn't add new APIs or behaviors. Why can't you just pin to a different version of React?
The new JSX runtime: I no longer have React in scope in these files, so I would have to make a massive change. I could do it with the IDE, but I don't think it's going to save me time now; as I said, that would be a waste of time
Just step back down to RC1 then. We'll release a new RC soon and you can try it again.
> Just step back down to RC1 then. We'll release a new RC soon and you can try it again.

I did, it is slow too 😭
> Just step back down to RC1 then. We'll release a new RC soon and you can try it again.

just fyi, rc.1 is just a little better, not much; it is still slow, but not as slow as rc.2
If you have no way to revert, you should not be upgrading to unstable versions. You should use source control and revert undesirable changes, or better yet, not merge them into master in the first place. I'm going to temporarily lock this thread because it's getting very noisy and we're not getting valuable information from it. We'll comment when we figure out the issue.
We've been able to reproduce the problem, so thanks for the report.
For now we've cut RC3 which doesn't have this issue, so please upgrade
to get the fix.
In general though **you should never place yourself in a situation where
you "need" a bugfix for an unstable release**. If you were not prepared
to roll back to 16, you should not have run the codemod or upgraded to
17 RC. The whole point of RCs is that they may contain bugs, so they're
not suitable for production deployments. And if you decide to ship it to users despite the risk, as soon as there's an issue you need to be prepared to roll back using version control.
> We've been able to reproduce the problem, so thanks for the report.
>
> For now we've cut RC3 which doesn't have this issue, so please upgrade to get the fix.
>
> In general though **you should never place yourself in a situation where you "need" a bugfix for an unstable release**. If you were not prepared to roll back to 16, you should not have run the codemod or upgraded to 17 RC. The whole point of RCs is that they may contain bugs, so they're not suitable for production deployments. And if you decide to ship it to users despite the risk, as soon as there's an issue you need to be prepared to roll back using version control.
Thanks, yes there is no problem, I'll go back to 17 RC1. We are working with the latest version because we want to release with the latest version; this does not block us at all. Yes, we ran the codemod and that was the only mistake, but even with that we still want this; we are in the dev stage, so we can, and it is what we want. Thanks for your work, I appreciate it
Can you verify whether RC3 also resolves the issue for you?
> Can you verify whether RC3 also resolves the issue for you?

Yes, the issue is resolved; it is working great now, thank you so much
@gaearon thank you i will take a look too
Cursor initially appears in incorrect location on first focus in editor.

Challenge
[Use an ID Attribute to Style an Element](https://www.freecodecamp.com/challenges/use-an-id-attribute-to-style-an-element#?solution=%3Clink%20href%3D%22https%3A%2F%2Ffonts.googleapis.com%2Fcss%3Ffamily%3DLobster%22%20rel%3D%22stylesheet%22%20type%3D%22text%2Fcss%22%3E%0A%3Cstyle%3E%0A%20%20.red-text%20%7B%0A%20%20%20%20color%3A%20red%3B%0A%20%20%7D%0A%20%20%23cat-photo-form%7B%0A%20%20%20%20background-color%3Agreen%3B%0A%20%20%7D%0A%20%20h2%20%7B%0A%20%20%20%20font-family%3A%20Lobster%2C%20Monospace%3B%0A%20%20%7D%0A%0A%20%20p%20%7B%0A%20%20%20%20font-size%3A%2016px%3B%0A%20%20%20%20font-family%3A%20Monospace%3B%0A%20%20%7D%0A%0A%20%20.thick-green-border%20%7B%0A%20%20%20%20border-color%3A%20green%3B%0A%20%20%20%20border-width%3A%2010px%3B%0A%20%20%20%20border-style%3A%20solid%3B%0A%20%20%20%20border-radius%3A%2050%25%3B%0A%20%20%7D%0A%0A%20%20.smaller-image%20%7B%0A%20%20%20%20width%3A%20100px%3B%0A%20%20%7D%0A%0A%20%20.gray-background%20%7B%0A%20%20%20%20background-color%3A%20gray%3B%0A%20%20%7D%0A%3C%2Fstyle%3E%0A%0A%3Ch2%20class%3D%22red-text%22%3ECatPhotoApp%3C%2Fh2%3E%0A%0A%3Cp%3EClick%20here%20for%20%3Ca%20href%3D%22%23%22%3Ecat%20photos%3C%2Fa%3E.%3C%2Fp%3E%0A%0A%3Ca%20href%3D%22%23%22%3E%3Cimg%20class%3D%22smaller-image%20thick-green-border%22%20alt%3D%22A%20cute%20orange%20cat%20lying%20on%20its%20back%22%20src%3D%22https%3A%2F%2Fbit.ly%2Ffcc-relaxing-cat%22%3E%3C%2Fa%3E%0A%0A%3Cdiv%20class%3D%22gray-background%22%3E%0A%20%20%3Cp%3EThings%20cats%20love%3A%3C%2Fp%3E%0A%20%20%3Cul%3E%0A%20%20%20%20%3Cli%3Ecat%20nip%3C%2Fli%3E%0A%20%20%20%20%3Cli%3Elaser%20pointers%3C%2Fli%3E%0A%20%20%20%20%3Cli%3Elasagna%3C%2Fli%3E%0A%20%20%3C%2Ful%3E%0A%20%20%3Cp%3ETop%203%20things%20cats%20hate%3A%3C%2Fp%3E%0A%20%20%3Col%3E%0A%20%20%20%20%3Cli%3Eflea%20treatment%3C%2Fli%3E%0A%20%20%20%20%3Cli%3Ethunder%3C%2Fli%3E%0A%20%20%20%20%3Cli%3Eother%20cats%3C%2Fli%3E%0A%20%20%3C%2Fol%3E%0A%3C%2Fdiv%3E%0A%0A%3Cform%20fccfaa%3D%22%2Fsubmit-cat-photo%22%20id%3D%22cat-photo-form%22%3E%0A%20%20%3Clabel%3E%3Cinput%20type%3D%22radio%22%20name%3D%22indoor-outdoor%22%20checked%3E%20Indoor%3C%2Flabel%3E%0A%20%20%3Clabel%3E%3Cinput%20type%3D%22radio%22%20name%3D%22indoor-outdoor%22%3E%20Outdoor%3C%2Flabel%3E%0A%20%20%3Clabel%3E%3Cinput%20type%3D%22checkbox%22%20name%3D%22personality%22%20checked%3E%20Loving%3C%2Flabel%3E%0A%20%20%3Clabel%3E%3Cinput%20type%3D%22checkbox%22%20name%3D%22personality%22%3E%20Lazy%3C%2Flabel%3E%0A%20%20%3Clabel%3E%3Cinput%20type%3D%22checkbox%22%20name%3D%22personality%22%3E%20Energetic%3C%2Flabel%3E%0A%20%20%3Cinput%20type%3D%22text%22%20placeholder%3D%22cat%20photo%20URL%22%20required%3E%0A%20%20%3Cbutton%20type%3D%22submit%22%3ESubmit%3C%2Fbutton%3E%0A%3C%2Fform%3E%0A)
has an issue.
User Agent is: `Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36`.
Please describe how to reproduce this issue, and include links to
screenshots if possible.
My code:
``` html
<link href="https://fonts.googleapis.com/css?family=Lobster" rel="stylesheet" type="text/css">
<style>
  .red-text {
    color: red;
  }
  #cat-photo-form{
    background-color:green;
  }
  h2 {
    font-family: Lobster, Monospace;
  }

  p {
    font-size: 16px;
    font-family: Monospace;
  }

  .thick-green-border {
    border-color: green;
    border-width: 10px;
    border-style: solid;
    border-radius: 50%;
  }

  .smaller-image {
    width: 100px;
  }

  .gray-background {
    background-color: gray;
  }
</style>

<h2 class="red-text">CatPhotoApp</h2>

<p>Click here for <a href="#">cat photos</a>.</p>

<a href="#"><img class="smaller-image thick-green-border" alt="A cute orange cat lying on its back" src="https://bit.ly/fcc-relaxing-cat"></a>

<div class="gray-background">
  <p>Things cats love:</p>
  <ul>
    <li>cat nip</li>
    <li>laser pointers</li>
    <li>lasagna</li>
  </ul>
  <p>Top 3 things cats hate:</p>
  <ol>
    <li>flea treatment</li>
    <li>thunder</li>
    <li>other cats</li>
  </ol>
</div>

<form action="/submit-cat-photo" id="cat-photo-form">
  <label><input type="radio" name="indoor-outdoor" checked> Indoor</label>
  <label><input type="radio" name="indoor-outdoor"> Outdoor</label>
  <label><input type="checkbox" name="personality" checked> Loving</label>
  <label><input type="checkbox" name="personality"> Lazy</label>
  <label><input type="checkbox" name="personality"> Energetic</label>
  <input type="text" placeholder="cat photo URL" required>
  <button type="submit">Submit</button>
</form>
```
@miguels4ntos Please name your issue appropriately.
A better title might be "Cursor initially appears in incorrect location on first focus in editor"
This happens for me, too, on various challenges, only when clicking one of the last lines in the editor.
Chrome/Mac
@BKinahan Done! Thanks!
Cursor jumps randomly in Code Editor.
In all the exercises, users are forced to press the Enter key before
writing any code.
----
#### Update:
We have temporarily locked the conversation on this thread to
collaborators only. This has been resolved in staging and will be live
soon.
The fix can be confirmed on the beta website.
The workaround currently on production website is:
Press the <kbd>Enter</kbd> key on the challenge editor and then proceed
with the challenge.
Apologies for the inconvenience meanwhile.
Reach us in the chat room if you need any assistance.
I could not reproduce this.
Could you provide some more information about your device / browser? I'm able to type within the editor as soon as a challenge loads, with the editor automatically taking focus, without needing to press enter.
It could be an issue specific to your setup.
I think I'm having the same issue with the editor.
In this exercise (please see the image below), for example, I'm not able to focus some parts of the code before hitting enter (neither with the mouse nor with the keyboard). No matter where I click below line 54, the cursor jumps right to the beginning of line 54. After hitting "enter" once anywhere in the code, it all works fine!
I'm running the latest version of Chrome on Windows 10. I've noticed this issue in most of the exercises so far.

I have the same issue as smlabt, no matter what exercise I'm doing. I'm using Firefox on Windows 10.
Line 56 `input`, would that work?
For anyone having this issue, please post your browser, OS, and the challenge you first experienced this on. If possible, a video or a GIF of the problem would be extremely helpful. Without this information the issue cannot be fixed.
Like I said I'm using Chrome on Windows 10.
Please try to reproduce the issue with a smaller window. I can imagine that this behaviour could have been caused by the line breaks.
I will try to create a gif asap.
@smlabt don't worry about the gif i have successfully reproduced it. Thank you for the help!
In some tasks the cursor won't go to the last line; it stops 3-5 lines above the bottom, so I had to add space above to complete the task (this occurred mostly in the HTML/CSS tasks).
My user agent is Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36 @BKinahan
Using this for the first time, and
this usability issue is the first thing I noticed. It's true for all
challenges I've tried so far in the javascript path. The _only_ way I
can get to the code window is to click in it. This is also true for scrolling the instructions panel.
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36
I just started learning to code on Free Code Camp. As I literally just started, I am a bit anxious about even trying to offer my help in diagnosing the issue.
What I can offer is my experience. Each time I load a page, I click into the code window and try to type at the clicked location.
The actual location it types at is below (seemingly at random), a short distance down from where I originally clicked.
The only time the cursor failed to jump down was when I clicked on the first line in the editor, the one it starts at when the page loads.
(edit)
I also wanted to add: the problem seemed to no longer be an issue starting on "https://www.freecodecamp.com/challenges/create-a-bootstrap-headline",
but it's still an issue if I go back to the previous lesson.
Pushes and rebuilds fail after the server ran out of disk space.
Our server ran out of disk space due to some docker diff files piling up (it
seems they need to be cleaned out regularly, but that is a different
topic). After freeing some space, `ps:rebuild` no longer works; it
just returns immediately with no output. We also can no longer push
to the server; we get:
```
! [remote rejected] dokku -> master (pre-receive hook declined)
error: failed to push some refs to 'dokku@.....:.....'
```
Help is greatly appreciated.
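(A hedged aside on the layer cleanup mentioned above: on Docker 1.13 and later, stopped containers and dangling image layers can be reclaimed in one step. A minimal sketch; the guard on `command -v docker` is only there so the snippet is safe to run anywhere.)

```shell
# Reclaim disk space from Docker leftovers (stopped containers, dangling
# image layers, unused networks). `docker system prune` is available from
# Docker 1.13 onward; on older daemons, dangling images can be removed
# with `docker rmi $(docker images -qf dangling=true)` instead.
if command -v docker >/dev/null 2>&1; then
  docker system prune -f || echo "prune failed (is the daemon running?)"
else
  echo "docker not installed; nothing to prune"
fi
```

Review what will be deleted before running this on a production host; prune is destructive.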
dokku report
-----> uname: Linux ubuntu-512mb-fra1-01 3.13.0-77-generic #121-Ubuntu SMP Wed Jan 20 10:50:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
-----> memory (MB):
             total       used       free     shared    buffers     cached
Mem:           994        647        346         10         19        111
-/+ buffers/cache:        515        478
Swap:          999        112        887
-----> docker version:
Client:
Version: 17.05.0-ce
API version: 1.29
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:06:06 2017
OS/Arch: linux/amd64
Server:
Version: 17.05.0-ce
API version: 1.29 (minimum version 1.12)
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:06:06 2017
OS/Arch: linux/amd64
Experimental: false
-----> docker daemon info:
Containers: 12
Running: 5
Paused: 0
Stopped: 7
Images: 32
Server Version: 17.05.0-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 1133
Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
apparmor
Kernel Version: 3.13.0-77-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 994MiB
Name: ubuntu-512mb-fra1-01
ID: 5XQF:UC63:3ZUV:GI2Y:JEWP:NJHA:NQ6X:5IKC:RRFA:XLTA:IILX:HBUZ
Docker Root Dir: /var/lib/docker
Debug Mode (client): true
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
-----> sigil version: 0.4.0
-----> herokuish version:
herokuish: 0.3.31
buildpacks:
heroku-buildpack-multi v1.0.0
heroku-buildpack-ruby v163
heroku-buildpack-nodejs v99
heroku-buildpack-clojure v76
heroku-buildpack-python v109
heroku-buildpack-java v52
heroku-buildpack-gradle v22
heroku-buildpack-grails v21
heroku-buildpack-scala v76
heroku-buildpack-play v26
heroku-buildpack-php v121
heroku-buildpack-go v69
heroku-buildpack-erlang fa17af9
buildpack-nginx v8
-----> dokku version: 0.5.7
-----> dokku plugins:
plugn: 0.3.0
00_dokku-standard 0.5.7 enabled dokku core standard plugin
20_events 0.5.7 enabled dokku core events logging plugin
apps 0.5.7 enabled dokku core apps plugin
build-env 0.5.7 enabled dokku core build-env plugin
certs 0.5.7 enabled dokku core certificate management plugin
checks 0.5.7 enabled dokku core checks plugin
common 0.5.7 enabled dokku core common plugin
config 0.5.7 enabled dokku core config plugin
docker-options 0.5.7 enabled dokku core docker-options plugin
domains 0.5.7 enabled dokku core domains plugin
enter 0.5.7 enabled dokku core enter plugin
git 0.5.7 enabled dokku core git plugin
letsencrypt 0.8.6 enabled Automated installation of let's encrypt TLS certificates
logs 0.5.7 enabled dokku core logs plugin
memcached 1.0.0 enabled dokku memcached service plugin
named-containers 0.5.7 enabled dokku core named containers plugin
nginx-vhosts 0.5.7 enabled dokku core nginx-vhosts plugin
plugin 0.5.7 enabled dokku core plugin plugin
postgres 1.0.0 enabled dokku postgres service plugin
proxy 0.5.7 enabled dokku core proxy plugin
ps 0.5.7 enabled dokku core ps plugin
shell 0.5.7 enabled dokku core shell plugin
storage 0.5.7 enabled dokku core storage plugin
tags 0.5.7 enabled dokku core tags plugin
tar 0.5.7 enabled dokku core tar plugin
Environment details (AWS, VirtualBox, physical, etc.): Digital Ocean
Can you turn on trace mode and try deploying?
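(For reference: dokku's trace mode is bash `set -x` in the plugin scripts, and is commonly enabled by exporting `DOKKU_TRACE=1` for the dokku user. A sketch follows; `DOKKU_RC` defaulting to `$HOME/.dokkurc` is an assumption made so the snippet runs anywhere, whereas on a stock install the file is `/home/dokku/.dokkurc`.)

```shell
# Persist DOKKU_TRACE=1 in the dokku user's rc file so that every dokku
# command runs with shell tracing. DOKKU_RC is a stand-in path; on a real
# host edit /home/dokku/.dokkurc (as root) instead.
DOKKU_RC="${DOKKU_RC:-$HOME/.dokkurc}"
touch "$DOKKU_RC"
grep -q '^export DOKKU_TRACE=1$' "$DOKKU_RC" || echo "export DOKKU_TRACE=1" >> "$DOKKU_RC"
grep '^export DOKKU_TRACE' "$DOKKU_RC"   # shows the line that was added
```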
Okay, this is what I get when pushing:
```
+ case "$(lsb_release -si)" in
++ lsb_release -si
+ export DOKKU_DISTRO=ubuntu
+ DOKKU_DISTRO=ubuntu
+ export DOKKU_IMAGE=gliderlabs/herokuish
+ DOKKU_IMAGE=gliderlabs/herokuish
+ export DOKKU_LIB_ROOT=/var/lib/dokku
+ DOKKU_LIB_ROOT=/var/lib/dokku
+ export PLUGIN_PATH=/var/lib/dokku/plugins
+ PLUGIN_PATH=/var/lib/dokku/plugins
+ export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ export DOKKU_API_VERSION=1
+ DOKKU_API_VERSION=1
+ export DOKKU_NOT_IMPLEMENTED_EXIT=10
+ DOKKU_NOT_IMPLEMENTED_EXIT=10
+ export DOKKU_VALID_EXIT=0
+ DOKKU_VALID_EXIT=0
+ export DOKKU_LOGS_DIR=/var/log/dokku
+ DOKKU_LOGS_DIR=/var/log/dokku
+ export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ export DOKKU_CONTAINER_LABEL=dokku
+ DOKKU_CONTAINER_LABEL=dokku
+ export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ parse_args git-receive-pack ''\''myapp'\'''
+ declare 'desc=top-level cli arg parser'
+ local next_index=1
+ local skip=false
+ args=("$@")
+ local args
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=2
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=3
+ return 0
+ args=("$@")
+ [[ git-receive-pack =~ ^--.* ]]
+ has_tty
+ declare 'desc=return 0 if we have a tty'
++ /usr/bin/tty
++ true
+ [[ not a tty == \n\o\t\ \a\ \t\t\y ]]
+ return 1
+ DOKKU_QUIET_OUTPUT=1
++ id -un
+ [[ dokku != \d\o\k\k\u ]]
++ id -un
+ [[ dokku != \r\o\o\t ]]
+ [[ git-receive-pack =~ ^plugin:.* ]]
+ [[ -n git-receive-pack 'myapp' ]]
+ export -n SSH_ORIGINAL_COMMAND
+ [[ git-receive-pack =~ config-* ]]
+ [[ git-receive-pack =~ docker-options* ]]
+ set -f
+ /usr/local/bin/dokku git-receive-pack ''\''myapp'\'''
+ case "$(lsb_release -si)" in
++ lsb_release -si
+ export DOKKU_DISTRO=ubuntu
+ DOKKU_DISTRO=ubuntu
+ export DOKKU_IMAGE=gliderlabs/herokuish
+ DOKKU_IMAGE=gliderlabs/herokuish
+ export DOKKU_LIB_ROOT=/var/lib/dokku
+ DOKKU_LIB_ROOT=/var/lib/dokku
+ export PLUGIN_PATH=/var/lib/dokku/plugins
+ PLUGIN_PATH=/var/lib/dokku/plugins
+ export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ export DOKKU_API_VERSION=1
+ DOKKU_API_VERSION=1
+ export DOKKU_NOT_IMPLEMENTED_EXIT=10
+ DOKKU_NOT_IMPLEMENTED_EXIT=10
+ export DOKKU_VALID_EXIT=0
+ DOKKU_VALID_EXIT=0
+ export DOKKU_LOGS_DIR=/var/log/dokku
+ DOKKU_LOGS_DIR=/var/log/dokku
+ export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ export DOKKU_CONTAINER_LABEL=dokku
+ DOKKU_CONTAINER_LABEL=dokku
+ export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ parse_args git-receive-pack ''\''myapp'\'''
+ declare 'desc=top-level cli arg parser'
+ local next_index=1
+ local skip=false
+ args=("$@")
+ local args
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=2
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=3
+ return 0
+ args=("$@")
+ [[ git-receive-pack =~ ^--.* ]]
+ has_tty
+ declare 'desc=return 0 if we have a tty'
++ /usr/bin/tty
++ true
+ [[ not a tty == \n\o\t\ \a\ \t\t\y ]]
+ return 1
+ DOKKU_QUIET_OUTPUT=1
++ id -un
+ [[ dokku != \d\o\k\k\u ]]
++ id -un
+ [[ dokku != \r\o\o\t ]]
+ [[ git-receive-pack =~ ^plugin:.* ]]
+ [[ -n '' ]]
+ dokku_auth git-receive-pack ''\''myapp'\'''
+ declare 'desc=calls user-auth plugin trigger'
+ export SSH_USER=dokku
+ SSH_USER=dokku
+ export 'SSH_NAME=[nico]'
+ SSH_NAME='[nico]'
+ plugn trigger user-auth dokku '[nico]' git-receive-pack ''\''myapp'\'''
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ [[ ! -n '' ]]
+ return 0
+ case "$1" in
+ execute_dokku_cmd git-receive-pack ''\''myapp'\'''
+ declare 'desc=executes dokku sub-commands'
+ local PLUGIN_NAME=git-receive-pack
+ local PLUGIN_CMD=git-receive-pack
+ local implemented=0
+ local script
+ argv=("$@")
+ local argv
+ case "$PLUGIN_NAME" in
++ readlink -f /var/lib/dokku/plugins/enabled/git-receive-pack
+ [[ /var/lib/dokku/plugins/enabled/git-receive-pack == *core-plugins* ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-receive-pack/subcommands/default ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-receive-pack/subcommands/git-receive-pack ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-receive-pack/subcommands/git-receive-pack ]]
+ [[ 0 -eq 0 ]]
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/00_dokku-standard/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/20_events/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/apps/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/certs/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/checks/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/config/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/docker-options/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/domains/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/enter/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/git/commands git-receive-pack ''\''myapp'\'''
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ source /var/lib/dokku/plugins/available/apps/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ source /var/lib/dokku/core-plugins/available/common/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
++ source /var/lib/dokku/plugins/available/config/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
+++ source /var/lib/dokku/core-plugins/available/common/functions
++++ set -eo pipefail
++++ [[ -n 1 ]]
++++ set -x
+ source /var/lib/dokku/plugins/available/config/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ source /var/lib/dokku/core-plugins/available/common/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
+ case "$1" in
+ git_glob_cmd git-receive-pack ''\''myapp'\'''
+ declare 'desc=catch-all for any other git-* commands'
+ local 'cmd=git-*'
++ sed 's/^\///g'
++ sed 's/\\'\''/'\''/g'
++ perl -pe 's/(?|!\\)'\''//g'
++ echo ''\''myapp'\'''
+ local APP=myapp
+ local APP_PATH=/home/dokku/myapp
+ [[ git-receive-pack == \g\i\t\-\r\e\c\e\i\v\e\-\p\a\c\k ]]
+ [[ ! -d /home/dokku/myapp/refs ]]
+ [[ git-receive-pack == \g\i\t\-\r\e\c\e\i\v\e\-\p\a\c\k ]]
+ local 'args=git-receive-pack '\''/home/dokku/myapp'\'''
+ git-shell -c 'git-receive-pack '\''/home/dokku/myapp'\'''
Total 0 (delta 0), reused 0 (delta 0)
remote: + case "$(lsb_release -si)" in
remote: ++ lsb_release -si
remote: + export DOKKU_DISTRO=ubuntu
remote: + DOKKU_DISTRO=ubuntu
remote: + export DOKKU_IMAGE=gliderlabs/herokuish
remote: + DOKKU_IMAGE=gliderlabs/herokuish
remote: + export DOKKU_LIB_ROOT=/var/lib/dokku
remote: + DOKKU_LIB_ROOT=/var/lib/dokku
remote: + export PLUGIN_PATH=/var/lib/dokku/plugins
remote: + PLUGIN_PATH=/var/lib/dokku/plugins
remote: + export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
remote: + PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
remote: + export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
remote: + PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
remote: + export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
remote: + PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
remote: + export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
remote: + PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
remote: + export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
remote: + PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
remote: + export DOKKU_API_VERSION=1
remote: + DOKKU_API_VERSION=1
remote: + export DOKKU_NOT_IMPLEMENTED_EXIT=10
remote: + DOKKU_NOT_IMPLEMENTED_EXIT=10
remote: + export DOKKU_VALID_EXIT=0
remote: + DOKKU_VALID_EXIT=0
remote: + export DOKKU_LOGS_DIR=/var/log/dokku
remote: + DOKKU_LOGS_DIR=/var/log/dokku
remote: + export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
remote: + DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
remote: + export DOKKU_CONTAINER_LABEL=dokku
remote: + DOKKU_CONTAINER_LABEL=dokku
remote: + export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
remote: + DOKKU_GLOBAL_RUN_ARGS=--label=dokku
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + parse_args git-hook myapp
remote: + declare 'desc=top-level cli arg parser'
remote: + local next_index=1
remote: + local skip=false
remote: + args=("$@")
remote: + local args
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=2
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=3
remote: + return 0
remote: + args=("$@")
remote: + [[ git-hook =~ ^--.* ]]
remote: + has_tty
remote: + declare 'desc=return 0 if we have a tty'
remote: ++ /usr/bin/tty
remote: ++ true
remote: + [[ not a tty == \n\o\t\ \a\ \t\t\y ]]
remote: + return 1
remote: + DOKKU_QUIET_OUTPUT=1
remote: ++ id -un
remote: + [[ dokku != \d\o\k\k\u ]]
remote: ++ id -un
remote: + [[ dokku != \r\o\o\t ]]
remote: + [[ git-hook =~ ^plugin:.* ]]
remote: + [[ -n '' ]]
remote: + dokku_auth git-hook myapp
remote: + declare 'desc=calls user-auth plugin trigger'
remote: + export SSH_USER=dokku
remote: + SSH_USER=dokku
remote: + export 'SSH_NAME=[nico]'
remote: + SSH_NAME='[nico]'
remote: + plugn trigger user-auth dokku '[nico]' git-hook myapp
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + [[ ! -n '' ]]
remote: + return 0
remote: + case "$1" in
remote: + execute_dokku_cmd git-hook myapp
remote: + declare 'desc=executes dokku sub-commands'
remote: + local PLUGIN_NAME=git-hook
remote: + local PLUGIN_CMD=git-hook
remote: + local implemented=0
remote: + local script
remote: + argv=("$@")
remote: + local argv
remote: + case "$PLUGIN_NAME" in
remote: ++ readlink -f /var/lib/dokku/plugins/enabled/git-hook
remote: + [[ /var/lib/dokku/plugins/enabled/git-hook == *core-plugins* ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-hook/subcommands/default ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-hook/subcommands/git-hook ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-hook/subcommands/git-hook ]]
remote: + [[ 0 -eq 0 ]]
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/00_dokku-standard/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/20_events/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/apps/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/certs/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/checks/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/config/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/docker-options/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/domains/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/enter/commands git-hook myapp
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/git/commands git-hook myapp
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + source /var/lib/dokku/plugins/available/apps/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: ++ source /var/lib/dokku/core-plugins/available/common/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: ++ source /var/lib/dokku/plugins/available/config/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: +++ source /var/lib/dokku/core-plugins/available/common/functions
remote: ++++ set -eo pipefail
remote: ++++ [[ -n 1 ]]
remote: ++++ set -x
remote: + source /var/lib/dokku/plugins/available/config/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: ++ source /var/lib/dokku/core-plugins/available/common/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: + case "$1" in
remote: + git_hook_cmd git-hook myapp
remote: + declare 'desc=kick off receive-app trigger from git prereceive hook'
remote: + local cmd=git-hook
remote: + local APP=myapp
remote: + local oldrev newrev refname
remote: + read -r oldrev newrev refname
remote: + [[ refs/heads/master = \r\e\f\s\/\h\e\a\d\s\/\m\a\s\t\e\r ]]
remote: + plugn trigger receive-app myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + [[ ! -n '' ]]
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + git_receive_app myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=git receive-app plugin trigger'
remote: + local trigger=git_receive_app
remote: + local APP=myapp
remote: + local REV=b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + [[ ! -d /home/dokku/myapp/refs ]]
remote: + dokku git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + case "$(lsb_release -si)" in
remote: ++ lsb_release -si
remote: + export DOKKU_DISTRO=ubuntu
remote: + DOKKU_DISTRO=ubuntu
remote: + export DOKKU_IMAGE=gliderlabs/herokuish
remote: + DOKKU_IMAGE=gliderlabs/herokuish
remote: + export DOKKU_LIB_ROOT=/var/lib/dokku
remote: + DOKKU_LIB_ROOT=/var/lib/dokku
remote: + export PLUGIN_PATH=/var/lib/dokku/plugins
remote: + PLUGIN_PATH=/var/lib/dokku/plugins
remote: + export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
remote: + PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
remote: + export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
remote: + PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
remote: + export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
remote: + PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
remote: + export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
remote: + PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
remote: + export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
remote: + PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
remote: + export DOKKU_API_VERSION=1
remote: + DOKKU_API_VERSION=1
remote: + export DOKKU_NOT_IMPLEMENTED_EXIT=10
remote: + DOKKU_NOT_IMPLEMENTED_EXIT=10
remote: + export DOKKU_VALID_EXIT=0
remote: + DOKKU_VALID_EXIT=0
remote: + export DOKKU_LOGS_DIR=/var/log/dokku
remote: + DOKKU_LOGS_DIR=/var/log/dokku
remote: + export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
remote: + DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
remote: + export DOKKU_CONTAINER_LABEL=dokku
remote: + DOKKU_CONTAINER_LABEL=dokku
remote: + export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
remote: + DOKKU_GLOBAL_RUN_ARGS=--label=dokku
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + parse_args git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=top-level cli arg parser'
remote: + local next_index=1
remote: + local skip=false
remote: + args=("$@")
remote: + local args
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=2
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=3
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=4
remote: + return 0
remote: + args=("$@")
remote: + [[ git-build =~ ^--.* ]]
remote: + has_tty
remote: + declare 'desc=return 0 if we have a tty'
remote: ++ /usr/bin/tty
remote: ++ true
remote: + [[ not a tty == \n\o\t\ \a\ \t\t\y ]]
remote: + return 1
remote: + DOKKU_QUIET_OUTPUT=1
remote: ++ id -un
remote: + [[ dokku != \d\o\k\k\u ]]
remote: ++ id -un
remote: + [[ dokku != \r\o\o\t ]]
remote: + [[ git-build =~ ^plugin:.* ]]
remote: + [[ -n '' ]]
remote: + dokku_auth git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=calls user-auth plugin trigger'
remote: + export SSH_USER=dokku
remote: + SSH_USER=dokku
remote: + export 'SSH_NAME=[nico]'
remote: + SSH_NAME='[nico]'
remote: + plugn trigger user-auth dokku '[nico]' git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + [[ ! -n '' ]]
remote: + return 0
remote: + case "$1" in
remote: + execute_dokku_cmd git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=executes dokku sub-commands'
remote: + local PLUGIN_NAME=git-build
remote: + local PLUGIN_CMD=git-build
remote: + local implemented=0
remote: + local script
remote: + argv=("$@")
remote: + local argv
remote: + case "$PLUGIN_NAME" in
remote: ++ readlink -f /var/lib/dokku/plugins/enabled/git-build
remote: + [[ /var/lib/dokku/plugins/enabled/git-build == *core-plugins* ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-build/subcommands/default ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-build/subcommands/git-build ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-build/subcommands/git-build ]]
remote: + [[ 0 -eq 0 ]]
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/00_dokku-standard/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/20_events/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/apps/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/certs/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/checks/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/config/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/docker-options/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/domains/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/enter/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/git/commands git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + source /var/lib/dokku/plugins/available/apps/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: ++ source /var/lib/dokku/core-plugins/available/common/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: ++ source /var/lib/dokku/plugins/available/config/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: +++ source /var/lib/dokku/core-plugins/available/common/functions
remote: ++++ set -eo pipefail
remote: ++++ [[ -n 1 ]]
remote: ++++ set -x
remote: + source /var/lib/dokku/plugins/available/config/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: ++ source /var/lib/dokku/core-plugins/available/common/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: + case "$1" in
remote: + git_build_cmd git-build myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=lock git-build'
remote: + local cmd=git-build
remote: + local APP=myapp
remote: + local APP_BUILD_LOCK=/home/dokku/myapp/.build.lock
remote: + local 'APP_BUILD_LOCK_MSG=myapp is currently being deployed or locked. Waiting...'
remote: ++ flock -n /home/dokku/myapp/.build.lock true
remote: ++ echo 0
remote: + [[ 0 -ne 0 ]]
remote: + shift 1
remote: + flock -o /home/dokku/myapp/.build.lock dokku git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + case "$(lsb_release -si)" in
remote: ++ lsb_release -si
remote: + export DOKKU_DISTRO=ubuntu
remote: + DOKKU_DISTRO=ubuntu
remote: + export DOKKU_IMAGE=gliderlabs/herokuish
remote: + DOKKU_IMAGE=gliderlabs/herokuish
remote: + export DOKKU_LIB_ROOT=/var/lib/dokku
remote: + DOKKU_LIB_ROOT=/var/lib/dokku
remote: + export PLUGIN_PATH=/var/lib/dokku/plugins
remote: + PLUGIN_PATH=/var/lib/dokku/plugins
remote: + export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
remote: + PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
remote: + export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
remote: + PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
remote: + export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
remote: + PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
remote: + export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
remote: + PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
remote: + export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
remote: + PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
remote: + export DOKKU_API_VERSION=1
remote: + DOKKU_API_VERSION=1
remote: + export DOKKU_NOT_IMPLEMENTED_EXIT=10
remote: + DOKKU_NOT_IMPLEMENTED_EXIT=10
remote: + export DOKKU_VALID_EXIT=0
remote: + DOKKU_VALID_EXIT=0
remote: + export DOKKU_LOGS_DIR=/var/log/dokku
remote: + DOKKU_LOGS_DIR=/var/log/dokku
remote: + export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
remote: + DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
remote: + export DOKKU_CONTAINER_LABEL=dokku
remote: + DOKKU_CONTAINER_LABEL=dokku
remote: + export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
remote: + DOKKU_GLOBAL_RUN_ARGS=--label=dokku
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + parse_args git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=top-level cli arg parser'
remote: + local next_index=1
remote: + local skip=false
remote: + args=("$@")
remote: + local args
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=2
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=3
remote: + for arg in '"$@"'
remote: + false
remote: + case "$arg" in
remote: + local next_index=4
remote: + return 0
remote: + args=("$@")
remote: + [[ git-build-locked =~ ^--.* ]]
remote: + has_tty
remote: + declare 'desc=return 0 if we have a tty'
remote: ++ /usr/bin/tty
remote: ++ true
remote: + [[ not a tty == \n\o\t\ \a\ \t\t\y ]]
remote: + return 1
remote: + DOKKU_QUIET_OUTPUT=1
remote: ++ id -un
remote: + [[ dokku != \d\o\k\k\u ]]
remote: ++ id -un
remote: + [[ dokku != \r\o\o\t ]]
remote: + [[ git-build-locked =~ ^plugin:.* ]]
remote: + [[ -n '' ]]
remote: + dokku_auth git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=calls user-auth plugin trigger'
remote: + export SSH_USER=dokku
remote: + SSH_USER=dokku
remote: + export 'SSH_NAME=[nico]'
remote: + SSH_NAME='[nico]'
remote: + plugn trigger user-auth dokku '[nico]' git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + [[ ! -n '' ]]
remote: + return 0
remote: + case "$1" in
remote: + execute_dokku_cmd git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=executes dokku sub-commands'
remote: + local PLUGIN_NAME=git-build-locked
remote: + local PLUGIN_CMD=git-build-locked
remote: + local implemented=0
remote: + local script
remote: + argv=("$@")
remote: + local argv
remote: + case "$PLUGIN_NAME" in
remote: ++ readlink -f /var/lib/dokku/plugins/enabled/git-build-locked
remote: + [[ /var/lib/dokku/plugins/enabled/git-build-locked == *core-plugins* ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-build-locked/subcommands/default ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-build-locked/subcommands/git-build-locked ]]
remote: + [[ -x /var/lib/dokku/plugins/enabled/git-build-locked/subcommands/git-build-locked ]]
remote: + [[ 0 -eq 0 ]]
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/00_dokku-standard/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/20_events/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/apps/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/certs/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/checks/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/config/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/docker-options/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/domains/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/enter/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + exit_code=10
remote: + set -e
remote: + [[ 10 -eq 10 ]]
remote: + continue
remote: + for script in '$PLUGIN_ENABLED_PATH/*/commands'
remote: + set +e
remote: + /var/lib/dokku/plugins/enabled/git/commands git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + source /var/lib/dokku/core-plugins/available/common/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: + source /var/lib/dokku/plugins/available/apps/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: ++ source /var/lib/dokku/core-plugins/available/common/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: ++ source /var/lib/dokku/plugins/available/config/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: +++ source /var/lib/dokku/core-plugins/available/common/functions
remote: ++++ set -eo pipefail
remote: ++++ [[ -n 1 ]]
remote: ++++ set -x
remote: + source /var/lib/dokku/plugins/available/config/functions
remote: ++ set -eo pipefail
remote: ++ [[ -n 1 ]]
remote: ++ set -x
remote: ++ source /var/lib/dokku/core-plugins/available/common/functions
remote: +++ set -eo pipefail
remote: +++ [[ -n 1 ]]
remote: +++ set -x
remote: + case "$1" in
remote: + git_build_locked_cmd git-build-locked myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=setup and call git_build_app_repo'
remote: + local cmd=git-build-locked
remote: + local APP=myapp
remote: + [[ 3 -ge 3 ]]
remote: + local REF=b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + git_build_app_repo myapp b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + declare 'desc=builds local git app repo for app'
remote: + verify_app_name myapp
remote: + declare 'desc=verify app name format and app existence'
remote: + local APP=myapp
remote: + [[ ! -n myapp ]]
remote: + [[ ! myapp =~ ^[a-z].* ]]
remote: + [[ ! -d /home/dokku/myapp ]]
remote: + return 0
remote: + local APP=myapp
remote: + local REV=b4287d41e948d5bd423edd1d5528d86784678d8c
remote: ++ mktemp -d /tmp/dokku_git.XXXX
remote: + local GIT_BUILD_APP_REPO_TMP_WORK_DIR=/tmp/dokku_git.kXDG
remote: + trap 'rm -rf "$GIT_BUILD_APP_REPO_TMP_WORK_DIR" | /dev/null' RETURN INT TERM EXIT
remote: + local TMP_TAG=dokku/b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + chmod 755 /tmp/dokku_git.kXDG
remote: + unset GIT_DIR GIT_WORK_TREE
remote: + pushd /tmp/dokku_git.kXDG
remote: + [[ ! -d /home/dokku/myapp ]]
remote: + GIT_DIR=/home/dokku/myapp
remote: + git tag -d dokku/b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + GIT_DIR=/home/dokku/myapp
remote: + git tag dokku/b4287d41e948d5bd423edd1d5528d86784678d8c b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + git init
remote: + git config advice.detachedHead false
remote: + git remote add origin /home/dokku/myapp
remote: + git fetch --depth=1 origin refs/tags/dokku/b4287d41e948d5bd423edd1d5528d86784678d8c
remote: + rm -rf /tmp/dokku_git.kXDG
remote: + exit_code=141
remote: + set -e
remote: + [[ 141 -eq 10 ]]
remote: + implemented=1
remote: + [[ 141 -ne 0 ]]
remote: + exit 141
remote: + exit_code=141
remote: + set -e
remote: + [[ 141 -eq 10 ]]
remote: + implemented=1
remote: + [[ 141 -ne 0 ]]
remote: + exit 141
remote: + exit_code=141
remote: + set -e
remote: + [[ 141 -eq 10 ]]
remote: + implemented=1
remote: + [[ 141 -ne 0 ]]
remote: + exit 141
+ exit_code=0
+ set -e
+ [[ 0 -eq 10 ]]
+ implemented=1
+ [[ 0 -ne 0 ]]
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/letsencrypt/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/logs/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/memcached/commands git-receive-pack ''\''myapp'\'''
+ PLUGIN_BASE_PATH=/var/lib/dokku/plugins
+ [[ -n 1 ]]
+ PLUGIN_BASE_PATH=/var/lib/dokku/plugins/enabled
+ source /var/lib/dokku/plugins/enabled/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ dirname /var/lib/dokku/plugins/enabled/memcached/commands
+ source /var/lib/dokku/plugins/enabled/memcached/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+++ dirname /var/lib/dokku/plugins/enabled/memcached/commands
++ source /var/lib/dokku/plugins/enabled/memcached/config
+++ export MEMCACHED_IMAGE=memcached
+++ MEMCACHED_IMAGE=memcached
+++ export MEMCACHED_IMAGE_VERSION=1.4.25
+++ MEMCACHED_IMAGE_VERSION=1.4.25
+++ export MEMCACHED_ROOT=/var/lib/dokku/services/memcached
+++ MEMCACHED_ROOT=/var/lib/dokku/services/memcached
+++ export PLUGIN_COMMAND_PREFIX=memcached
+++ PLUGIN_COMMAND_PREFIX=memcached
+++ export PLUGIN_DATA_ROOT=/var/lib/dokku/services/memcached
+++ PLUGIN_DATA_ROOT=/var/lib/dokku/services/memcached
+++ PLUGIN_DATASTORE_PORTS=(11211)
+++ export PLUGIN_DATASTORE_PORTS
+++ export PLUGIN_DEFAULT_ALIAS=MEMCACHED
+++ PLUGIN_DEFAULT_ALIAS=MEMCACHED
+++ export PLUGIN_ALT_ALIAS=DOKKU_MEMCACHED
+++ PLUGIN_ALT_ALIAS=DOKKU_MEMCACHED
+++ export PLUGIN_IMAGE=memcached
+++ PLUGIN_IMAGE=memcached
+++ export PLUGIN_IMAGE_VERSION=1.4.25
+++ PLUGIN_IMAGE_VERSION=1.4.25
+++ export PLUGIN_SCHEME=memcached
+++ PLUGIN_SCHEME=memcached
+++ export PLUGIN_SERVICE=Memcached
+++ PLUGIN_SERVICE=Memcached
++ dirname /var/lib/dokku/plugins/enabled/memcached/commands
+ source /var/lib/dokku/plugins/enabled/memcached/config
++ export MEMCACHED_IMAGE=memcached
++ MEMCACHED_IMAGE=memcached
++ export MEMCACHED_IMAGE_VERSION=1.4.25
++ MEMCACHED_IMAGE_VERSION=1.4.25
++ export MEMCACHED_ROOT=/var/lib/dokku/services/memcached
++ MEMCACHED_ROOT=/var/lib/dokku/services/memcached
++ export PLUGIN_COMMAND_PREFIX=memcached
++ PLUGIN_COMMAND_PREFIX=memcached
++ export PLUGIN_DATA_ROOT=/var/lib/dokku/services/memcached
++ PLUGIN_DATA_ROOT=/var/lib/dokku/services/memcached
++ PLUGIN_DATASTORE_PORTS=(11211)
++ export PLUGIN_DATASTORE_PORTS
++ export PLUGIN_DEFAULT_ALIAS=MEMCACHED
++ PLUGIN_DEFAULT_ALIAS=MEMCACHED
++ export PLUGIN_ALT_ALIAS=DOKKU_MEMCACHED
++ PLUGIN_ALT_ALIAS=DOKKU_MEMCACHED
++ export PLUGIN_IMAGE=memcached
++ PLUGIN_IMAGE=memcached
++ export PLUGIN_IMAGE_VERSION=1.4.25
++ PLUGIN_IMAGE_VERSION=1.4.25
++ export PLUGIN_SCHEME=memcached
++ PLUGIN_SCHEME=memcached
++ export PLUGIN_SERVICE=Memcached
++ PLUGIN_SERVICE=Memcached
+ [[ git-receive-pack == memcached:* ]]
+ [[ -d /var/lib/dokku/services/memcached/* ]]
+ case "$1" in
+ exit 10
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/nginx-vhosts/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/plugin/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/postgres/commands git-receive-pack ''\''myapp'\'''
+ PLUGIN_BASE_PATH=/var/lib/dokku/plugins
+ [[ -n 1 ]]
+ PLUGIN_BASE_PATH=/var/lib/dokku/plugins/enabled
+ source /var/lib/dokku/plugins/enabled/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ dirname /var/lib/dokku/plugins/enabled/postgres/commands
+ source /var/lib/dokku/plugins/enabled/postgres/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+++ dirname /var/lib/dokku/plugins/enabled/postgres/commands
++ source /var/lib/dokku/plugins/enabled/postgres/config
+++ export POSTGRES_IMAGE=postgres
+++ POSTGRES_IMAGE=postgres
+++ export POSTGRES_IMAGE_VERSION=9.5.0
+++ POSTGRES_IMAGE_VERSION=9.5.0
+++ export POSTGRES_ROOT=/var/lib/dokku/services/postgres
+++ POSTGRES_ROOT=/var/lib/dokku/services/postgres
+++ export PLUGIN_COMMAND_PREFIX=postgres
+++ PLUGIN_COMMAND_PREFIX=postgres
+++ export PLUGIN_DATA_ROOT=/var/lib/dokku/services/postgres
+++ PLUGIN_DATA_ROOT=/var/lib/dokku/services/postgres
+++ PLUGIN_DATASTORE_PORTS=(5432)
+++ export PLUGIN_DATASTORE_PORTS
+++ export PLUGIN_DEFAULT_ALIAS=DATABASE
+++ PLUGIN_DEFAULT_ALIAS=DATABASE
+++ export PLUGIN_ALT_ALIAS=DOKKU_POSTGRES
+++ PLUGIN_ALT_ALIAS=DOKKU_POSTGRES
+++ export PLUGIN_IMAGE=postgres
+++ PLUGIN_IMAGE=postgres
+++ export PLUGIN_IMAGE_VERSION=9.5.0
+++ PLUGIN_IMAGE_VERSION=9.5.0
+++ export PLUGIN_SCHEME=postgres
+++ PLUGIN_SCHEME=postgres
+++ export PLUGIN_SERVICE=Postgres
+++ PLUGIN_SERVICE=Postgres
++ dirname /var/lib/dokku/plugins/enabled/postgres/commands
+ source /var/lib/dokku/plugins/enabled/postgres/config
++ export POSTGRES_IMAGE=postgres
++ POSTGRES_IMAGE=postgres
++ export POSTGRES_IMAGE_VERSION=9.5.0
++ POSTGRES_IMAGE_VERSION=9.5.0
++ export POSTGRES_ROOT=/var/lib/dokku/services/postgres
++ POSTGRES_ROOT=/var/lib/dokku/services/postgres
++ export PLUGIN_COMMAND_PREFIX=postgres
++ PLUGIN_COMMAND_PREFIX=postgres
++ export PLUGIN_DATA_ROOT=/var/lib/dokku/services/postgres
++ PLUGIN_DATA_ROOT=/var/lib/dokku/services/postgres
++ PLUGIN_DATASTORE_PORTS=(5432)
++ export PLUGIN_DATASTORE_PORTS
++ export PLUGIN_DEFAULT_ALIAS=DATABASE
++ PLUGIN_DEFAULT_ALIAS=DATABASE
++ export PLUGIN_ALT_ALIAS=DOKKU_POSTGRES
++ PLUGIN_ALT_ALIAS=DOKKU_POSTGRES
++ export PLUGIN_IMAGE=postgres
++ PLUGIN_IMAGE=postgres
++ export PLUGIN_IMAGE_VERSION=9.5.0
++ PLUGIN_IMAGE_VERSION=9.5.0
++ export PLUGIN_SCHEME=postgres
++ PLUGIN_SCHEME=postgres
++ export PLUGIN_SERVICE=Postgres
++ PLUGIN_SERVICE=Postgres
+ [[ git-receive-pack == postgres:* ]]
+ [[ -d /var/lib/dokku/services/postgres/* ]]
+ case "$1" in
+ exit 10
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/proxy/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/ps/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/shell/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/storage/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/tags/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/tar/commands git-receive-pack ''\''myapp'\'''
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ [[ 1 -eq 0 ]]
+ set +f
+ exit 0
```
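For anyone reading the trace: the repeated `exit_code=10` / `continue` pattern is dokku's command dispatcher probing each enabled plugin's `commands` script, where exit code 10 (`DOKKU_NOT_IMPLEMENTED_EXIT`) means "this plugin doesn't handle the command, try the next one", and any other non-zero code (like the `141` from the failed `git fetch` above) aborts the run. A simplified sketch of that convention (hypothetical `plugin_a`/`plugin_b`/`dispatch` names, not dokku's actual code):

```shell
#!/usr/bin/env bash
# Sketch of the dispatch loop visible in the trace. Exit code 10 means
# "command not implemented here"; the dispatcher moves on to the next plugin.
# Any other non-zero exit (e.g. 141 = SIGPIPE) propagates and stops the build.
DOKKU_NOT_IMPLEMENTED_EXIT=10

plugin_a() { exit 10; }                      # does not implement this command
plugin_b() { echo "handled by b"; exit 0; }  # implements it successfully

dispatch() {
  local implemented=0 exit_code
  for plugin in plugin_a plugin_b; do
    ( "$plugin" )            # subshell so the plugin's 'exit' stays contained
    exit_code=$?
    [[ $exit_code -eq $DOKKU_NOT_IMPLEMENTED_EXIT ]] && continue
    implemented=1
    [[ $exit_code -ne 0 ]] && return "$exit_code"  # real failure: abort
  done
  [[ $implemented -eq 0 ]] && return 1       # no plugin handled the command
  return 0
}

dispatch
```

So the dozens of `[[ 10 -eq 10 ]]` lines in the log are expected noise, not errors; the actual failure is wherever a non-10, non-zero `exit_code` first appears.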
And this when rebuilding:
```
+ case "$(lsb_release -si)" in
++ lsb_release -si
+ export DOKKU_DISTRO=ubuntu
+ DOKKU_DISTRO=ubuntu
+ export DOKKU_IMAGE=gliderlabs/herokuish
+ DOKKU_IMAGE=gliderlabs/herokuish
+ export DOKKU_LIB_ROOT=/var/lib/dokku
+ DOKKU_LIB_ROOT=/var/lib/dokku
+ export PLUGIN_PATH=/var/lib/dokku/plugins
+ PLUGIN_PATH=/var/lib/dokku/plugins
+ export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ export DOKKU_API_VERSION=1
+ DOKKU_API_VERSION=1
+ export DOKKU_NOT_IMPLEMENTED_EXIT=10
+ DOKKU_NOT_IMPLEMENTED_EXIT=10
+ export DOKKU_VALID_EXIT=0
+ DOKKU_VALID_EXIT=0
+ export DOKKU_LOGS_DIR=/var/log/dokku
+ DOKKU_LOGS_DIR=/var/log/dokku
+ export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ export DOKKU_CONTAINER_LABEL=dokku
+ DOKKU_CONTAINER_LABEL=dokku
+ export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ parse_args ps:rebuild myapp
+ declare 'desc=top-level cli arg parser'
+ local next_index=1
+ local skip=false
+ args=("$@")
+ local args
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=2
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=3
+ return 0
+ args=("$@")
+ [[ ps:rebuild =~ ^--.* ]]
+ has_tty
+ declare 'desc=return 0 if we have a tty'
++ /usr/bin/tty
+ [[ /dev/pts/0 == \n\o\t\ \a\ \t\t\y ]]
+ return 0
++ id -un
+ [[ root != \d\o\k\k\u ]]
+ [[ ! ps:rebuild =~ plugin:* ]]
++ id -un
+ export SSH_USER=root
+ SSH_USER=root
+ sudo -u dokku -E -H /usr/local/bin/dokku ps:rebuild myapp
+ case "$(lsb_release -si)" in
++ lsb_release -si
+ export DOKKU_DISTRO=ubuntu
+ DOKKU_DISTRO=ubuntu
+ export DOKKU_IMAGE=gliderlabs/herokuish
+ DOKKU_IMAGE=gliderlabs/herokuish
+ export DOKKU_LIB_ROOT=/var/lib/dokku
+ DOKKU_LIB_ROOT=/var/lib/dokku
+ export PLUGIN_PATH=/var/lib/dokku/plugins
+ PLUGIN_PATH=/var/lib/dokku/plugins
+ export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ export DOKKU_API_VERSION=1
+ DOKKU_API_VERSION=1
+ export DOKKU_NOT_IMPLEMENTED_EXIT=10
+ DOKKU_NOT_IMPLEMENTED_EXIT=10
+ export DOKKU_VALID_EXIT=0
+ DOKKU_VALID_EXIT=0
+ export DOKKU_LOGS_DIR=/var/log/dokku
+ DOKKU_LOGS_DIR=/var/log/dokku
+ export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ export DOKKU_CONTAINER_LABEL=dokku
+ DOKKU_CONTAINER_LABEL=dokku
+ export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ parse_args ps:rebuild myapp
+ declare 'desc=top-level cli arg parser'
+ local next_index=1
+ local skip=false
+ args=("$@")
+ local args
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=2
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=3
+ return 0
+ args=("$@")
+ [[ ps:rebuild =~ ^--.* ]]
+ has_tty
+ declare 'desc=return 0 if we have a tty'
++ /usr/bin/tty
+ [[ /dev/pts/0 == \n\o\t\ \a\ \t\t\y ]]
+ return 0
++ id -un
+ [[ dokku != \d\o\k\k\u ]]
++ id -un
+ [[ dokku != \r\o\o\t ]]
+ [[ ps:rebuild =~ ^plugin:.* ]]
+ [[ -n '' ]]
+ dokku_auth ps:rebuild myapp
+ declare 'desc=calls user-auth plugin trigger'
+ export SSH_USER=root
+ SSH_USER=root
+ export SSH_NAME=default
+ SSH_NAME=default
+ plugn trigger user-auth root default ps:rebuild myapp
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ [[ ! -n '' ]]
+ return 0
+ case "$1" in
+ execute_dokku_cmd ps:rebuild myapp
+ declare 'desc=executes dokku sub-commands'
+ local PLUGIN_NAME=ps:rebuild
+ local PLUGIN_CMD=ps:rebuild
+ local implemented=0
+ local script
+ argv=("$@")
+ local argv
+ case "$PLUGIN_NAME" in
++ readlink -f /var/lib/dokku/plugins/enabled/ps
+ [[ /var/lib/dokku/core-plugins/available/ps == *core-plugins* ]]
+ [[ ps:rebuild == \p\s\:\r\e\b\u\i\l\d ]]
+ shift 1
+ [[ ! -z '' ]]
+ set -- ps:rebuild myapp
+ [[ -x /var/lib/dokku/plugins/enabled/ps:rebuild/subcommands/default ]]
+ [[ -x /var/lib/dokku/plugins/enabled/ps:rebuild/subcommands/ps:rebuild ]]
+ [[ -x /var/lib/dokku/plugins/enabled/ps/subcommands/rebuild ]]
+ /var/lib/dokku/plugins/enabled/ps/subcommands/rebuild ps:rebuild myapp
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ source /var/lib/dokku/plugins/available/ps/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ source /var/lib/dokku/core-plugins/available/common/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
++ source /var/lib/dokku/plugins/available/config/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
+++ source /var/lib/dokku/core-plugins/available/common/functions
++++ set -eo pipefail
++++ [[ -n 1 ]]
++++ set -x
+ ps_rebuild_cmd ps:rebuild myapp
+ declare 'desc=rebuilds app via command line'
+ local cmd=ps:rebuild
+ [[ -z myapp ]]
+ ps_rebuild myapp
+ declare 'desc=rebuilds app from base image'
+ local APP=myapp
+ verify_app_name myapp
+ declare 'desc=verify app name format and app existence'
+ local APP=myapp
+ [[ ! -n myapp ]]
+ [[ ! myapp =~ ^[a-z].* ]]
+ [[ ! -d /home/dokku/myapp ]]
+ return 0
+ plugn trigger receive-app myapp
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ [[ ! -n '' ]]
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ git_receive_app myapp
+ declare 'desc=git receive-app plugin trigger'
+ local trigger=git_receive_app
+ local APP=myapp
+ local REV=
+ [[ ! -d /home/dokku/myapp/refs ]]
+ dokku git-build myapp
+ case "$(lsb_release -si)" in
++ lsb_release -si
+ export DOKKU_DISTRO=ubuntu
+ DOKKU_DISTRO=ubuntu
+ export DOKKU_IMAGE=gliderlabs/herokuish
+ DOKKU_IMAGE=gliderlabs/herokuish
+ export DOKKU_LIB_ROOT=/var/lib/dokku
+ DOKKU_LIB_ROOT=/var/lib/dokku
+ export PLUGIN_PATH=/var/lib/dokku/plugins
+ PLUGIN_PATH=/var/lib/dokku/plugins
+ export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ export DOKKU_API_VERSION=1
+ DOKKU_API_VERSION=1
+ export DOKKU_NOT_IMPLEMENTED_EXIT=10
+ DOKKU_NOT_IMPLEMENTED_EXIT=10
+ export DOKKU_VALID_EXIT=0
+ DOKKU_VALID_EXIT=0
+ export DOKKU_LOGS_DIR=/var/log/dokku
+ DOKKU_LOGS_DIR=/var/log/dokku
+ export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ export DOKKU_CONTAINER_LABEL=dokku
+ DOKKU_CONTAINER_LABEL=dokku
+ export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ parse_args git-build myapp
+ declare 'desc=top-level cli arg parser'
+ local next_index=1
+ local skip=false
+ args=("$@")
+ local args
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=2
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=3
+ return 0
+ args=("$@")
+ [[ git-build =~ ^--.* ]]
+ has_tty
+ declare 'desc=return 0 if we have a tty'
++ /usr/bin/tty
+ [[ /dev/pts/0 == \n\o\t\ \a\ \t\t\y ]]
+ return 0
++ id -un
+ [[ dokku != \d\o\k\k\u ]]
++ id -un
+ [[ dokku != \r\o\o\t ]]
+ [[ git-build =~ ^plugin:.* ]]
+ [[ -n '' ]]
+ dokku_auth git-build myapp
+ declare 'desc=calls user-auth plugin trigger'
+ export SSH_USER=root
+ SSH_USER=root
+ export SSH_NAME=default
+ SSH_NAME=default
+ plugn trigger user-auth root default git-build myapp
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ [[ ! -n '' ]]
+ return 0
+ case "$1" in
+ execute_dokku_cmd git-build myapp
+ declare 'desc=executes dokku sub-commands'
+ local PLUGIN_NAME=git-build
+ local PLUGIN_CMD=git-build
+ local implemented=0
+ local script
+ argv=("$@")
+ local argv
+ case "$PLUGIN_NAME" in
++ readlink -f /var/lib/dokku/plugins/enabled/git-build
+ [[ /var/lib/dokku/plugins/enabled/git-build == *core-plugins* ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-build/subcommands/default ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-build/subcommands/git-build ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-build/subcommands/git-build ]]
+ [[ 0 -eq 0 ]]
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/00_dokku-standard/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/20_events/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/apps/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/certs/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/checks/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/config/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/docker-options/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/domains/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/enter/commands git-build myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/git/commands git-build myapp
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ source /var/lib/dokku/plugins/available/apps/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ source /var/lib/dokku/core-plugins/available/common/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
++ source /var/lib/dokku/plugins/available/config/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
+++ source /var/lib/dokku/core-plugins/available/common/functions
++++ set -eo pipefail
++++ [[ -n 1 ]]
++++ set -x
+ source /var/lib/dokku/plugins/available/config/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
++ source /var/lib/dokku/core-plugins/available/common/functions
+++ set -eo pipefail
+++ [[ -n 1 ]]
+++ set -x
+ case "$1" in
+ git_build_cmd git-build myapp
+ declare 'desc=lock git-build'
+ local cmd=git-build
+ local APP=myapp
+ local APP_BUILD_LOCK=/home/dokku/myapp/.build.lock
+ local 'APP_BUILD_LOCK_MSG=myapp is currently being deployed or locked. Waiting...'
++ flock -n /home/dokku/myapp/.build.lock true
++ echo 0
+ [[ 0 -ne 0 ]]
+ shift 1
+ flock -o /home/dokku/myapp/.build.lock dokku git-build-locked myapp
+ case "$(lsb_release -si)" in
++ lsb_release -si
+ export DOKKU_DISTRO=ubuntu
+ DOKKU_DISTRO=ubuntu
+ export DOKKU_IMAGE=gliderlabs/herokuish
+ DOKKU_IMAGE=gliderlabs/herokuish
+ export DOKKU_LIB_ROOT=/var/lib/dokku
+ DOKKU_LIB_ROOT=/var/lib/dokku
+ export PLUGIN_PATH=/var/lib/dokku/plugins
+ PLUGIN_PATH=/var/lib/dokku/plugins
+ export PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ PLUGIN_AVAILABLE_PATH=/var/lib/dokku/plugins/available
+ export PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ PLUGIN_ENABLED_PATH=/var/lib/dokku/plugins/enabled
+ export PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ PLUGIN_CORE_PATH=/var/lib/dokku/core-plugins
+ export PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ PLUGIN_CORE_AVAILABLE_PATH=/var/lib/dokku/core-plugins/available
+ export PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ PLUGIN_CORE_ENABLED_PATH=/var/lib/dokku/core-plugins/enabled
+ export DOKKU_API_VERSION=1
+ DOKKU_API_VERSION=1
+ export DOKKU_NOT_IMPLEMENTED_EXIT=10
+ DOKKU_NOT_IMPLEMENTED_EXIT=10
+ export DOKKU_VALID_EXIT=0
+ DOKKU_VALID_EXIT=0
+ export DOKKU_LOGS_DIR=/var/log/dokku
+ DOKKU_LOGS_DIR=/var/log/dokku
+ export DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ DOKKU_EVENTS_LOGFILE=/var/log/dokku/events.log
+ export DOKKU_CONTAINER_LABEL=dokku
+ DOKKU_CONTAINER_LABEL=dokku
+ export DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ DOKKU_GLOBAL_RUN_ARGS=--label=dokku
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ parse_args git-build-locked myapp
+ declare 'desc=top-level cli arg parser'
+ local next_index=1
+ local skip=false
+ args=("$@")
+ local args
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=2
+ for arg in '"$@"'
+ false
+ case "$arg" in
+ local next_index=3
+ return 0
+ args=("$@")
+ [[ git-build-locked =~ ^--.* ]]
+ has_tty
+ declare 'desc=return 0 if we have a tty'
++ /usr/bin/tty
+ [[ /dev/pts/0 == \n\o\t\ \a\ \t\t\y ]]
+ return 0
++ id -un
+ [[ dokku != \d\o\k\k\u ]]
++ id -un
+ [[ dokku != \r\o\o\t ]]
+ [[ git-build-locked =~ ^plugin:.* ]]
+ [[ -n '' ]]
+ dokku_auth git-build-locked myapp
+ declare 'desc=calls user-auth plugin trigger'
+ export SSH_USER=root
+ SSH_USER=root
+ export SSH_NAME=default
+ SSH_NAME=default
+ plugn trigger user-auth root default git-build-locked myapp
+ source /var/lib/dokku/core-plugins/available/common/functions
++ set -eo pipefail
++ [[ -n 1 ]]
++ set -x
+ [[ ! -n '' ]]
+ return 0
+ case "$1" in
+ execute_dokku_cmd git-build-locked myapp
+ declare 'desc=executes dokku sub-commands'
+ local PLUGIN_NAME=git-build-locked
+ local PLUGIN_CMD=git-build-locked
+ local implemented=0
+ local script
+ argv=("$@")
+ local argv
+ case "$PLUGIN_NAME" in
++ readlink -f /var/lib/dokku/plugins/enabled/git-build-locked
+ [[ /var/lib/dokku/plugins/enabled/git-build-locked == *core-plugins* ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-build-locked/subcommands/default ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-build-locked/subcommands/git-build-locked ]]
+ [[ -x /var/lib/dokku/plugins/enabled/git-build-locked/subcommands/git-build-locked ]]
+ [[ 0 -eq 0 ]]
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/00_dokku-standard/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/20_events/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/apps/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/certs/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/checks/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/config/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/docker-options/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/domains/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
+ /var/lib/dokku/plugins/enabled/enter/commands git-build-locked myapp
+ exit_code=10
+ set -e
+ [[ 10 -eq 10 ]]
+ continue
+ for script in '$PLUGIN_ENABLED_PATH/*/commands'
+ set +e
```
Can you remove the redirect to stderr on [this line](https://github.com/dokku/dokku/blob/a9a9d0a898c23a1df9252b6ae1f8fecc2ff1be4e/plugins/git/functions#L57) and try pushing again?
Ok, I get this:
```
remote: + git fetch --depth=1 origin refs/tags/dokku/b4287d41e948d5bd423edd1d5528d86784678d8c
remote: fatal: write error: No space left on device
```
But I have a different version of that git file you linked to, it is not
called `functions` but `commands`
(`dokku/plugins/enabled/git/commands`).
Okay, so even though I cleaned up there is no space. Hm. Any idea how I can make more space? Most of the space is taken in /var/lib/docker/aufs/diff.
It is a 30 GB server and there is only one app. :)
Here is the disk space info if that helps:
```
Filesystem Size Used Avail Use% Mounted on
udev 487M 4.0K 487M 1% /dev
tmpfs 100M 552K 99M 1% /run
/dev/disk/by-label/DOROOT 30G 27G 1.4G 96% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 497M 2.8M 495M 1% /run/shm
none 100M 0 100M 0% /run/user
overflow 1.0M 0 1.0M 0% /tmp
none 30G 27G 1.4G 96% /var/lib/docker/aufs/mnt/6cf7122027c0fc8f30350803dc83d016398b69bf17e5b69349b8559718d787f6
shm 64M 0 64M 0% /var/lib/docker/containers/5d0b9c824ea6a7b05c73293f5ac92e8bc6060cd9b69d48aef2a90555a6124a4c/shm
none 30G 27G 1.4G 96% /var/lib/docker/aufs/mnt/96c252a39a50a42b7f30b641c4c35bf3fe4fc339990a7127d7a7f6a83aad4e75
shm 64M 0 64M 0% /var/lib/docker/containers/f149b478793abf788bb3c13b89509965e678083db541cc7a762059dcbcdb2737/shm
none 30G 27G 1.4G 96% /var/lib/docker/aufs/mnt/6a54418da4a675d48752d2bfc679ea3d72b30fd744732450d3208931edfea689
shm 64M 4.0K 64M 1% /var/lib/docker/containers/0e442500e4008b22e8247296a9a79e67fe4aeac3526dc265fa1fa633e77ef77b/shm
none 30G 27G 1.4G 96% /var/lib/docker/aufs/mnt/947a39aeaadd4774c45360bb6bd792ca94cbae4570e29147fad213e188137e70
shm 64M 0 64M 0% /var/lib/docker/containers/64af2fc33af08a1e7907d5ff2f419a5187c8c170eace4f6a08a65ac6484d3e53/shm
none 30G 27G 1.4G 96% /var/lib/docker/aufs/mnt/163a716c9ff437e0dc48aea195b5907cb78e820aa4838a65efb7c8849a66e361
shm 64M 0 64M 0% /var/lib/docker/containers/3cb07b2cd8962f3c9e27d032e3bdd033bcfb4e723f943a5cc2193907ba662919/shm
```
orphaned diffs
I'd like to know why docker uses so much disk, even after removing _all_ containers, images, and volumes.
It looks like this "diff" has a layer, but the layer isn't referenced by anything at all.
```
/var/lib/docker/aufs/diff# du-summary
806628 c245c4c6d71ecdd834974e1e679506d33c4aac5f552cb4b28e727a596efc1695-removing
302312 a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
302304 957e78f9f9f4036689734df16dabccb98973e2c3de0863ef3f84de85dca8d92d
302256 8db1d610f3fbc71415f534a5d88318bbd2f3f783375813f2288d15f15846d312
288204 ac6b8ff4c0e7b91230ebf4c1caf16f06c1fdceff6111fd58f4ea50fc2dd5050b
288180 04a478c413ea80bcfa7f6560763beef991696eace2624254479e5e5dd69708c6
287804 d033ab6e4e5231dc46c6c417c680b239bb0e843024738517cbb0397128e166ca
233420 8e21143dca49e30cae7475b71b5aee9b92abe2069fbb9ab98ce9c334e3f6d4fa
212668 a631b94f7a2d5d21a96a78e9574d39cdeebbc81b51ac6c58bd48dc4045656477
205120 ae13341f8c08a925a95e5306ac039b0e0bbf000dda1a60afb3d15c838e43e349
205120 8d42279017d6095bab8d533ab0f1f7de229aa7483370ef53ead71fe5be3f1284
205116 59b3acd8e0cfd194d44313978d4b3769905cdb5204a590069c665423b10150e3
205116 040af0eee742ec9fb2dbeb32446ce44829cd72f02a2cf31283fcd067e73798ab
158024 ef0a29ff0b515c8c57fe78bcbd597243de9f7b274d9b212c774d91bd45a6c9b1
114588 061bd7e021afd4aaffa9fe6a6de491e10d8d37d9cbe7612138f58543e0985280
114576 149e8d2745f6684bc2106218711991449c452d4c7e6203e2a0f46651399162b0
114532 52b28112913abb0ed1b3267a0baa1cacd022ca6611812d0a8a428e61ec399589
114300 52475beba19687a886cba4bdb8508d5aaf051ceb52fb3a65294141ab846c8294
76668 4e6afb958b5ee6dea6d1a886d19fc9c780d4ecc4baeebfbde31f9bb97732d10d
76640 c61340c6a962ddd484512651046a676dbbc6a5d46aecc26995c49fe987bf9cdc
/var/lib/docker/aufs/diff# du -hs a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
296M a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
$ docker-find a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
+ docker=/var/lib/docker
+ sudo find /var/lib/docker '(' -path '/var/lib/docker/aufs/diff/*' -o -path '/var/lib/docker/aufs/mnt/*' ')' -prune -o -print
+ grep a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
/var/lib/docker/aufs/layers/a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
+ sudo find /var/lib/docker '(' -path '/var/lib/docker/aufs/diff/*' -o -path '/var/lib/docker/aufs/mnt/*' ')' -prune -o -type f -print0
+ sudo xargs -0 -P20 grep -l a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
/var/lib/docker/aufs/layers/993e4988c510ec3ab4f6d139740a059df40585576f8196817e573a9684554c5c
/var/lib/docker/aufs/layers/95e68d59a8704f2bb52cc1306ca910ddb7af8956eb7c57970fcf7d8b3d9baddb
/var/lib/docker/aufs/layers/4e6afb958b5ee6dea6d1a886d19fc9c780d4ecc4baeebfbde31f9bb97732d10d
/var/lib/docker/aufs/layers/fd895b6f56aedf09c48dba97931a34cea863a21175450c31b6ceadde03f7b3da
/var/lib/docker/aufs/layers/ac6b8ff4c0e7b91230ebf4c1caf16f06c1fdceff6111fd58f4ea50fc2dd5050b
/var/lib/docker/aufs/layers/f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579-init
/var/lib/docker/aufs/layers/d5bbef5adf2efb6f15d4f96c4bee21beb955255d1ec17baf35de66e98e6c7328
/var/lib/docker/aufs/layers/9646360df378b88eae6f1d6288439eebd9647d5b9e8a471840d4a9d6ed5d92a4
/var/lib/docker/aufs/layers/cf9fd1c4a64baa39b6d6d9dac048ad2fff3c3fe13924b07377e767eed230ba9f
/var/lib/docker/aufs/layers/f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579
/var/lib/docker/aufs/layers/23ce5a473b101d85f0e9465debe5a0f3b8a2079b99528a797b02052d06bc11d8
/var/lib/docker/image/aufs/layerdb/sha256/d1c659b8e3d0e893e95c8eedc755adcb91a1c2022e1090376b451f7206f9b1c0/cache-id
$ sudo cat /var/lib/docker/image/aufs/layerdb/sha256/d1c659b8e3d0e893e95c8eedc755adcb91a1c2022e1090376b451f7206f9b1c0/diff
sha256:b5185949ba02a6e065079660b0536672c9691fb0e0cb1fd912b2c7b29c91d625
$ docker-find sha256:b5185949ba02a6e065079660b0536672c9691fb0e0cb1fd912b2c7b29c91d625
+ docker=/var/lib/docker
+ sudo find /var/lib/docker '(' -path '/var/lib/docker/aufs/diff/*' -o -path '/var/lib/docker/aufs/mnt/*' ')' -prune -o -print
+ grep sha256:b5185949ba02a6e065079660b0536672c9691fb0e0cb1fd912b2c7b29c91d625
+ sudo find /var/lib/docker '(' -path '/var/lib/docker/aufs/diff/*' -o -path '/var/lib/docker/aufs/mnt/*' ')' -prune -o -type f -print0
+ sudo xargs -0 -P20 grep -l sha256:b5185949ba02a6e065079660b0536672c9691fb0e0cb1fd912b2c7b29c91d625
/var/lib/docker/image/aufs/layerdb/sha256/d1c659b8e3d0e893e95c8eedc755adcb91a1c2022e1090376b451f7206f9b1c0/diff
```
```
# docker --version
Docker version 1.10.3, build 99b71ce
# docker info
Containers: 3
Running: 0
Paused: 0
Stopped: 3
Images: 29
Server Version: 1.10.3
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 99
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 3.13.0-83-generic
Operating System: <unknown>
OSType: linux
Architecture: x86_64
CPUs: 24
Total Memory: 125.9 GiB
Name: dev34-devc
ID: VKMX:YMJ2:3NGV:5J6I:5RYM:AVBK:QPOZ:ODYE:VQ2D:AF2J:2LEM:TKTE
WARNING: No swap limit support
```
I should also show that docker lists no containers, volumes, or images:
```
$ docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker volume ls
DRIVER VOLUME NAME
```
strange; especially because of;
```
Containers: 3
Running: 0
Paused: 0
Stopped: 3
Images: 29
```
which doesn't match the output of `docker images` / `docker ps`.
What operating system are you running on?
```
Operating System: <unknown>
```
@tonistiigi any idea?
That was afterward. I guess some processes kicked off in the meantime.
The state I'm referring to (I have now) is:
```
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
```
And I still have:
```
$ sudo du -hs /var/lib/docker/aufs/diff/a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
296M /var/lib/docker/aufs/diff/a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
```
We're on Ubuntu Lucid with an upgraded kernel =/
```
$ uname -a
Linux dev34-devc 3.13.0-83-generic #127-Ubuntu SMP Fri Mar 11 00:25:37 UTC 2016 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 10.04.1 LTS
Release: 10.04
Codename: lucid
```
It seems like an interesting issue. Is there a way to reproduce it? @bukzor
Surely it's possible, but I don't know how.
Please try running the below script on one of your active docker hosts and see what's left.
In our case, there's always plenty of diffs left behind.
```bash
#!/bin/bash
set -eu
echo "WARNING:: This will stop ALL docker processes and remove ALL docker images."
read -p "Continue (y/n)? "
if [ "$REPLY" != "y" ]; then
echo "Aborting."
exit 1
fi
xdocker() { exec xargs -P10 -r -n1 --verbose docker "$@"; }
set -x
# remove containers
docker ps -q | xdocker stop
docker ps -aq | xdocker rm
# remove tags
docker images | sed 1d | grep -v '^<none>' | col 1 2 | sed 's/ /:/' | xdocker rmi
# remove images
docker images -q | xdocker rmi
docker images -aq | xdocker rmi
# remove volumes
docker volume ls -q | xdocker volume rm
```
One possible way I see this happening is if there are errors on aufs unmounting. For example, if there are EBUSY errors, then probably the image configuration has already been deleted before.
@bukzor It would be very interesting if there was a reproducer that would start from an empty graph directory, pull/run images, and get it into a state where it doesn't fully clean up after running your script.
That would be interesting, but sounds like a full day's work.
I can't commit to that.
Here's some more data regarding the (arbitrarily selected) troublesome diff above, `a800`.
```sh
$ docker-find a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea | sudo xargs -n1 wc -l | sort -rn
+ sudo find /nail/var/lib/docker '(' -path '/nail/var/lib/docker/aufs/diff/*' -o -path '/nail/var/lib/docker/aufs/mnt/*' ')' -prune -o -print
+ grep a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
+ sudo find /nail/var/lib/docker '(' -path '/nail/var/lib/docker/aufs/diff/*' -o -path '/nail/var/lib/docker/aufs/mnt/*' ')' -prune -o -type f -print0
+ sudo xargs -0 -P20 grep -l a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
15 /nail/var/lib/docker/aufs/layers/f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579
14 /nail/var/lib/docker/aufs/layers/f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579-init
13 /nail/var/lib/docker/aufs/layers/993e4988c510ec3ab4f6d139740a059df40585576f8196817e573a9684554c5c
12 /nail/var/lib/docker/aufs/layers/cf9fd1c4a64baa39b6d6d9dac048ad2fff3c3fe13924b07377e767eed230ba9f
11 /nail/var/lib/docker/aufs/layers/4e6afb958b5ee6dea6d1a886d19fc9c780d4ecc4baeebfbde31f9bb97732d10d
10 /nail/var/lib/docker/aufs/layers/23ce5a473b101d85f0e9465debe5a0f3b8a2079b99528a797b02052d06bc11d8
9 /nail/var/lib/docker/aufs/layers/95e68d59a8704f2bb52cc1306ca910ddb7af8956eb7c57970fcf7d8b3d9baddb
8 /nail/var/lib/docker/aufs/layers/ac6b8ff4c0e7b91230ebf4c1caf16f06c1fdceff6111fd58f4ea50fc2dd5050b
7 /nail/var/lib/docker/aufs/layers/fd895b6f56aedf09c48dba97931a34cea863a21175450c31b6ceadde03f7b3da
6 /nail/var/lib/docker/aufs/layers/d5bbef5adf2efb6f15d4f96c4bee21beb955255d1ec17baf35de66e98e6c7328
5 /nail/var/lib/docker/aufs/layers/9646360df378b88eae6f1d6288439eebd9647d5b9e8a471840d4a9d6ed5d92a4
4 /nail/var/lib/docker/aufs/layers/a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
0 /nail/var/lib/docker/image/aufs/layerdb/sha256/d1c659b8e3d0e893e95c8eedc755adcb91a1c2022e1090376b451f7206f9b1c0/cache-id
```
So we see there's a chain of child layers, with `f3286009193` as the tip.
```
$ docker-find f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579'$'
+ sudo find /nail/var/lib/docker '(' -path '/nail/var/lib/docker/aufs/diff/*' -o -path '/nail/var/lib/docker/aufs/mnt/*' ')' -prune -o -print
+ grep --color 'f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579$'
/nail/var/lib/docker/aufs/layers/f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579
+ sudo find /nail/var/lib/docker '(' -path '/nail/var/lib/docker/aufs/diff/*' -o -path '/nail/var/lib/docker/aufs/mnt/*' ')' -prune -o -type f -print0
+ sudo xargs -0 -P20 grep --color -l 'f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579$'
/nail/var/lib/docker/image/aufs/layerdb/mounts/eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e/mount-id
```
So that layer was used in mount `eb809c0321`. I don't find any references to that mount anywhere:
```
$ docker-find eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e
+ sudo find /nail/var/lib/docker '(' -path '/nail/var/lib/docker/aufs/diff/*' -o -path '/nail/var/lib/docker/aufs/mnt/*' ')' -prune -o -print
+ grep --color eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e
/nail/var/lib/docker/image/aufs/layerdb/mounts/eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e
/nail/var/lib/docker/image/aufs/layerdb/mounts/eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e/mount-id
/nail/var/lib/docker/image/aufs/layerdb/mounts/eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e/init-id
/nail/var/lib/docker/image/aufs/layerdb/mounts/eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e/parent
+ sudo find /nail/var/lib/docker '(' -path '/nail/var/lib/docker/aufs/diff/*' -o -path '/nail/var/lib/docker/aufs/mnt/*' ')' -prune -o -type f -print0
+ sudo xargs -0 -P20 grep --color -l eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e
```
Is there any way to find what container that mount was used for?
The doc only says the mount ID is no longer equal to the container ID, which isn't very helpful.
https://docs.docker.com/engine/userguide/storagedriver/aufs-driver/
@bukzor `eb809c0321` is the container id. What docs mean is that aufs id (`f3286009193f` in your case) is not container id.
/cc @dmcgowan as well
@tonistiigi OK.
Then obviously the mount has outlived its container.
At what point in the container lifecycle is the mount cleaned up?
Is this the temporary writable aufs for running/stopped containers?
@bukzor The (rw) mount is deleted on container deletion. Unmount happens on container process stop. Diff folders are where the individual layer contents are stored; it doesn't matter if the layer is mounted or not.
@bukzor The link between the aufs id and the container id can be found at `image/aufs/layerdb/mounts/<container-id>/mount-id`. From just knowing an aufs id, the easiest way to find the container id is to grep the `image/aufs/layerdb` directory for it. If nothing is found, then the cleanup was not completed cleanly.
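The lookup described above can be sketched as two small shell helpers (a sketch only; the function names are my own, and the paths assume the layerdb layout quoted in this thread):

```shell
#!/bin/sh
# mount_id_for_container DOCKER_ROOT CONTAINER_ID
# Forward lookup: read the aufs mount id recorded for a container at
# image/aufs/layerdb/mounts/<container-id>/mount-id.
mount_id_for_container() {
    cat "$1/image/aufs/layerdb/mounts/$2/mount-id"
}

# container_for_aufs_id DOCKER_ROOT AUFS_ID
# Reverse lookup: grep the layerdb for an aufs id and print the owning
# container id (the directory name under mounts/). No output means the
# cleanup was not completed cleanly.
container_for_aufs_id() {
    grep -rl "$2" "$1/image/aufs/layerdb/mounts" 2>/dev/null |
        sed 's|.*/mounts/\([^/]*\)/.*|\1|'
}
```

For example, `container_for_aufs_id /var/lib/docker <aufs-id>` is just the scripted form of the grep suggested above.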
Running into a similar issue. We're running daily CI on the docker daemon server. /var/lib/docker/aufs/diff takes up quite an amount of disk capacity, which it shouldn't.
Still `2gb` in `aufs/diff` after trying everything reasonable suggested here or in related threads (including @bukzor's bash script above).
Short of a proper fix, is there any straightforward way of removing the
leftover mounts without removing all other images at the same time? (If
no containers are running currently, I guess there should be no mounts, right?)
I am experiencing the same issue. I am using this machine to test a lot of containers, then commit/delete. My /var/lib/docker/aufs directory
is currently 7.9G heavy. I'm going to have to move this directory to
another mount point, because storage on this one is limited. :(
```
# du -sh /var/lib/docker/aufs/diff/
1.9T	/var/lib/docker/aufs/diff/
```
@mcallaway Everything in `aufs/diff` is going to be fs writes performed
in a container.
I have the same issue. All containers which I have are in running state,
but there are lots of aufs diff directories which don't relate to these
containers and relate to old removed containers. I can remove them
manually, but it is not an option. There should be a reason for such a
behavior.
I use k8s 1.3.5 and docker 1.12.
Running of the `docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc spotify/docker-gc` helped.
I have the same issue. I'm using GitLab CI with dind (docker in docker).
IMHO, when an image in the registry was updated within the same tag and was pulled, and the related container was restarted, the old container and image are not GCed unless you run `spotify/docker-gc`.
Can someone else confirm this?
@kayrus correct, docker will not automatically assume that an "untagged"
image should also be _removed_. Containers could still be using that
image, and you can still start new containers from that image
(referencing it by its ID). You can remove "dangling" images using
`docker rmi $(docker images -qa -f dangling=true)`. Also, docker 1.13
will get data management commands (see
https://github.com/docker/docker/pull/26108), which allow you to more
easily cleanup unused images, containers, etc.
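The dangling-image removal above can be wrapped so it is a no-op when nothing is dangling (a sketch; `remove_dangling` is my own name, not a Docker command):

```shell
#!/bin/sh
# remove_dangling: remove untagged ("dangling") images, as suggested
# above; skips the rmi call entirely when there is nothing to remove.
remove_dangling() {
    dangling=$(docker images -qa -f dangling=true)
    if [ -n "$dangling" ]; then
        # Word splitting on the id list is intentional here.
        docker rmi $dangling
    fi
}
```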
@thaJeztah does `/var/lib/docker/aufs/diff/` actually contain the "untagged" images?
@kayrus yes they are part of the images (tagged, and untagged)
getting a similar issue, no containers/images/volumes, ~13Gb of diffs
```
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 1030
Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 3.13.0-32-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.861 GiB
Name: gitrunner
ID: GSAW:6X5Z:SHHU:NZIM:O76D:P5OE:7OZG:UFGQ:BOAJ:HJFM:5G6W:5APP
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
```
```
$ docker volume ls
DRIVER VOLUME NAME
$ docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$
```
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
/dev/mapper/gitrunner--docker-lib--docker 18G 15G 2.6G 85% /var/lib/docker
```
```
/var/lib/docker# sudo du -sm aufs/*
13782 aufs/diff
5 aufs/layers
5 aufs/mnt
```
``` shell
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 1.12.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: xfs
Dirs: 1122
```
Same issue here. I understand 1.13 may get data management commands but
in the meantime, I just want to safely delete the contents of this
directory without killing Docker.
This is relatively blocking at this point.
Same here. Still no official solution?
I've brought this up a few different times in (Docker Community) Slack.
Each time a handful of people run through a list of garbage collection
scripts/cmds I should run as a solution.
While those have helped (read: not solved - space is still creeping
towards full) in the interim, I think we can all agree that's not the ideal long term fix.
@jadametz 1.13 has `docker system prune`.
Beyond that, I'm not sure how else Docker can help (open to suggestion). The images aren't just getting to the system on their own, but rather through pulls, builds, etc.
In terms of actual orphaned layers (no images on the system referencing them), we'll need to address that separately.
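Those orphaned layers can at least be enumerated by checking each `aufs/diff` directory against the layerdb, in the spirit of the greps earlier in this thread (a sketch only; it reports and does not delete, and `find_orphan_diffs` is my own name):

```shell
#!/bin/sh
# find_orphan_diffs DOCKER_ROOT
# Prints every aufs/diff directory whose id is not referenced anywhere
# under image/aufs/layerdb. Report only; stop the daemon before
# deleting anything by hand.
find_orphan_diffs() {
    root=$1
    for d in "$root"/aufs/diff/*/; do
        [ -d "$d" ] || continue
        id=$(basename "$d")
        # -init layers are tracked via their base id in the layerdb.
        grep -qr "${id%-init}" "$root/image/aufs/layerdb" 2>/dev/null ||
            echo "orphan: $id"
    done
}
```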
I have exactly the same issue!
```
docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 2501
Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 3.13.0-96-generic
Operating System: Ubuntu 14.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 14.69 GiB
Name: ip-172-31-45-4
ID: R5WV:BXU5:AV6T:GZUK:SAEA:6E74:PRSO:NQOH:EPMQ:W6UT:5DU4:LE64
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
```
No images, containers or volumes. 42Gb in aufs/diff
Anything to help clear this directory safely would be very useful! Tried everything in this thread without any success. Thanks.
@adamdry only third-party script: https://github.com/docker/docker/issues/22207#issuecomment-252560212
Thanks @kayrus I did indeed try that and it increased my total disk usage slightly and didn't appear to do anything to the aufs/diff directory.
I also tried `docker system prune` which didn't run. And I tried `docker rmi $(docker images -qa -f dangling=true)` which didn't find any images to remove.
For anyone interested I'm now using this to clean down all containers, images, volumes and old aufs:
`### FYI I am a Docker noob so I don't know if this causes any underlying issues but it does work for me - use at your own risk ###`
Lots of inspiration taken from here: http://stackoverflow.com/questions/30984569/error-error-creating-aufs-mount-to-when-building-dockerfile
```
docker rm -f $(docker ps -a -q) || docker rmi -f $(docker images -q) || docker rmi -f $(docker images -a -q)
service docker stop
rm -rf /var/lib/docker/aufs
rm -rf /var/lib/docker/image/aufs
rm -f /var/lib/docker/linkgraph.db
service docker start
```
@adamdry Best to not use `-f` when doing rm/rmi as it will hide errors in removal.
I do consider the current situation... where `-f` hides an error and then we are left with some left-over state that is completely invisible to the user... as a bug.
I'm also seeing this on a completely new and unsurprising installation:
```
root@builder:/var/lib/docker# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.4
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 63
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: overlay host null bridge
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options:
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 3.625 GiB
Name: builder
ID: 2WXZ:BT74:G2FH:W7XD:VVXM:74YS:EA3A:ZQUK:LPID:WYKF:HDWC:UKMJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
Insecure Registries:
127.0.0.0/8
root@builder:/var/lib/docker# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@builder:/var/lib/docker# docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
root@builder:/var/lib/docker# du -hd2
4.0K ./swarm
6.0M ./image/aufs
6.0M ./image
4.0K ./trust
28K ./volumes
4.0K ./containers
276K ./aufs/layers
292K ./aufs/mnt
1.5G ./aufs/diff  <-------------------------
1.5G ./aufs
4.0K ./tmp
72K ./network/files
76K ./network
1.5G .
root@builder:/var/lib/docker#
```
@robhaswell Seeing as it's a new
install, do you want to try this?
https://github.com/docker/docker/issues/22207#issuecomment-266784433
@adamdry I've already deleted `/var/lib/docker/aufs` as it was blocking
my work. What do you expect your instructions to achieve? If they stop
the problem from happening again in the future, then I can try to
recreate the issue and try your instructions. However if the purpose is
just to free up the space then I've already achieved that.
@robhaswell Yeah, it was to free up disk space, but I had follow-up issues when trying to rebuild my images; following all the steps in that script resolved them.
During build, if the build process is interrupted during the layer build process (which also contains a blob to be copied), followed by stopping the container, it leaves behind data in /var/lib/docker/aufs/diff/. It showed up as a dangling image, and cleaning that up didn't release the space either. Is it possible to include this as part of docker system prune? Only deleting the blob data inside this folder frees up the space, and I am not sure whether that will cause any issue or not.
Docker version: 1.13.0-rc1
> During build, if the build process is interrupted during layer build process (which also contains a blob to be copied), followed by stopping the container, it leaves behind data

This could also be the cause of my problems - I interrupt a lot of builds.
During docker pull, I observed the following two cases:
1. If the process is interrupted when it says downloading (which downloads the image layer into /var/lib/docker/tmp/), it cleans up all the data in that folder.
2. If the process is interrupted when it says extracting (which I suppose is extracting the layer from tmp into /var/lib/docker/aufs/diff/), it cleans up both the tmp and diff blob data.
During the image build process:
1. On interrupting the process during "Sending build context to docker daemon" (which, in my case, copies blob data into /var/lib/docker/tmp/), the data remains there forever and cannot be cleaned by any command except manually deleting it. I am not sure how the apt-get updates in the image are handled.
2. While a layer containing a blob of data, say a large software setup, is being built, if the process is interrupted, the docker container keeps working on the image. In my case only one layer's blob data, which is already available in the tmp folder, makes up the whole image. But if the container is stopped using the docker stop command, two cases happen:
   a. If the mount process is still happening, it will leave behind data in the tmp and diff folders.
   b. If the data was copied into the diff folder, it will remove the data from the tmp folder and leave data in the diff folder, and maybe the mount folder.
We have an automated build process, which needs a way to stop any build process gracefully. Recently, a process got killed by the kernel due to an out-of-memory error on a machine with a low configuration.
If one image is to be built of 2 layers, the 1st layer is built and the 2nd is interrupted, docker system prune seems to clean up the data for the container of the layer which was interrupted and stopped, but it doesn't clean up the data of the previous layers in case of an interrupt. Also, it didn't reflect the total disk space reclaimed. Ran these tests on AWS, Ubuntu 14.04, x86_64, with the aufs filesystem; ran the docker prune test with docker 1.13.0-rc3 and docker 1.12.
@thaJeztah
Please let me know if I am misinterpreting anything.
I opened an issue for the `/var/lib/docker/tmp` files not being cleaned up; https://github.com/docker/docker/issues/29486
> Docker system prune seems to clean up the data for the container of the layer which was interrupted and the container stopped. But it doesn't clean up the data of the previous layers in case of interrupt.
I tried to reproduce that situation, but wasn't able to see that with a simple case;
Start with a clean install empty `/var/lib/docker`, create a big file for
testing, and a Dockerfile;
```bash
mkdir repro && cd repro
fallocate -l 300M bigfile
```
```Dockerfile
cat > Dockerfile <<EOF
FROM scratch
COPY ./bigfile /
COPY ./bigfile /again/
COPY ./bigfile /and-again/
EOF
```
start `docker build`, and cancel while building, but _after_ the build
context has been sent;
```bash
docker build -t stopme .
Sending build context to Docker daemon 314.6 MB
Step 1/4 : FROM scratch
 --->
Step 2/4 : COPY ./bigfile /
 ---> 28eb6d7b0920
Removing intermediate container 98876b1673bf
Step 3/4 : COPY ./bigfile /again/
^C
```
check content of `/var/lib/docker/aufs/`
```bash
du -h /var/lib/docker/aufs/
301M /var/lib/docker/aufs/diff/9127644c356579741348f7f11f50c50c9a40e0120682782dab55614189e82917
301M /var/lib/docker/aufs/diff/81fd6b2c0cf9a28026cf8982331016a6cd62b7df5a3cf99182e7e09fe0d2f084/again
301M /var/lib/docker/aufs/diff/81fd6b2c0cf9a28026cf8982331016a6cd62b7df5a3cf99182e7e09fe0d2f084
601M /var/lib/docker/aufs/diff
8.0K /var/lib/docker/aufs/layers
4.0K /var/lib/docker/aufs/mnt/9127644c356579741348f7f11f50c50c9a40e0120682782dab55614189e82917
4.0K /var/lib/docker/aufs/mnt/81fd6b2c0cf9a28026cf8982331016a6cd62b7df5a3cf99182e7e09fe0d2f084
4.0K /var/lib/docker/aufs/mnt/b6ffb1d5ece015ed4d3cf847cdc50121c70dc1311e42a8f76ae8e35fa5250ad3-init
16K /var/lib/docker/aufs/mnt
601M /var/lib/docker/aufs/
```
run the `docker system prune` command to clean up images, containers;
```
docker system prune -a
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all images without at least one container associated to them
Are you sure you want to continue? [y/N] y
Deleted Images:
deleted: sha256:253b2968c0b9daaa81a58f2a04e4bc37f1dbf958e565a42094b92e3a02c7b115
deleted: sha256:cad1de5fd349865ae10bfaa820bea3a9a9f000482571a987c8b2b69d7aa1c997
deleted: sha256:28eb6d7b09201d58c8a0e2b861712701cf522f4844cf80e61b4aa4478118c5ab
deleted: sha256:3cda5a28d6953622d6a363bfaa3b6dbda57b789e745c90e039d9fc8a729740db
Total reclaimed space: 629.1 MB
```
check content of `/var/lib/docker/aufs/`
```bash
du -h /var/lib/docker/aufs/
4.0K /var/lib/docker/aufs/diff
4.0K /var/lib/docker/aufs/layers
4.0K /var/lib/docker/aufs/mnt/b6ffb1d5ece015ed4d3cf847cdc50121c70dc1311e42a8f76ae8e35fa5250ad3-init
8.0K /var/lib/docker/aufs/mnt
20K /var/lib/docker/aufs/
```
I do see that the `-init` mount is left behind, I'll check if we can solve
that (although it's just an empty directory)
The only difference in the Dockerfile I used was (to create different layers):
```Dockerfile
FROM scratch
COPY ["./bigfile", "randomNoFile1", "/"]
COPY ["./bigfile", "randomNoFile2", "/"]
```
I am not sure if it makes a difference.
No, the problem isn't the empty init folders. In my case, it was the blob. However, I can recheck it on Monday and update.
Also, I was using a 5GB file, created by reading bytes from /dev/urandom.
In your case, the same file is added twice. Would that create a single layer and mount the second layer from it, or would it be two separate layers? In my case, it's always two separate layers.
@thaJeztah
Thank you for such a quick response on the issue. Addition of this
feature would be of great help!
@monikakatiyar16 I tried to reproduce this as well with canceling the
build multiple times during both `ADD` and `RUN` commands but couldn't
get anything to leak to `aufs/diff` after deletion. I couldn't quite
understand what container you are stopping because containers should not
be running during `ADD/COPY` operations. If you can put together a
reproducer that we could run that would be greatly appreciated.
It's possible I am doing something wrong. Since I am travelling over the weekend, I will reproduce it and update all the required info here on Monday.
@tonistiigi @thaJeztah
I feel you are right. There are actually no containers listed as active and running; instead there are dead containers. Docker system prune didn't work in my case, probably because the process didn't get killed with Ctrl+C and instead kept running in the background, which would be why it couldn't remove those blobs.
When I interrupt the process using Ctrl+C, the build process gets killed, but a docker-untar process remains alive in the background and keeps working on building the image. (Note: /var/lib/docker is symlinked to /home/lib/docker to use EBS volumes for large data on AWS.)
`root 12700 10781 7 11:43 ? 00:00:04 docker-untar /home/lib/docker/aufs/mnt/d446d4f8a7dbae162e7578af0d33ac38a63b4892905aa86a8d131c1e75e2828c`
I have attached the script I used for creating large files and building the image (gc_maxpush_pull.sh), along with the behaviour of the build process when interrupted with Ctrl+C (DockerBuild_WOProcessKill) and when interrupted with Ctrl+C plus killing the process (DockerBuild_WithProcessKill).
Using the commands -
To create large file : `./gc_maxpush_pull.sh 1 5gblayer 0 512 1`
To build images : `./gc_maxpush_pull.sh 1 5gblayer 1 512 1`
[DockerBuild.zip](https://github.com/docker/docker/files/660942/DockerBuild.zip)
Steps to replicate :
1. Create a large file of 5GB
2. Start the build process and interrupt it only after sending the build context is over and it's actually copying the blob.
3. It completes building the image after a while and shows it up in docker images (as in case 1, attached as DockerBuild_WOProcessKill)
4. If the process is killed, it takes a while and leaves the blob data in /diff (which it should, on killing the process abruptly, as attached as DockerBuild_WithProcessKill)
If what I am assuming is correct, this might not be an issue with docker prune, but rather with killing of docker build, which somehow is not working for me.
Is there a graceful way of interrupting or stopping the build process that also cleans up the copied data (as docker pull does)?
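There doesn't seem to be a built-in graceful stop for `docker build`, but a client-side wrapper that forwards the signal and waits for the command to actually exit can at least keep cleanup ordered. This is a sketch under my own assumptions (it is not docker-specific, and it cannot reach a daemon-side docker-untar process):

```shell
#!/usr/bin/env bash
# Hypothetical wrapper: run a long command (e.g. `docker build ...`) in the
# background, forward INT/TERM to it, and wait for it to exit so any
# follow-up cleanup runs only after the command is really gone.
run_interruptible() {
  "$@" &                        # start the command in the background
  local pid=$!
  trap 'kill "$pid" 2>/dev/null' INT TERM
  wait "$pid"                   # returns the command's exit status
}
```

One would call it as `run_interruptible docker build -t myimage .` and follow it with the container and volume cleanup described in this thread.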
Previously, I was not killing the process. I am also curious what docker-untar does, and why it mounts into both the /mnt and /diff folders and later cleans out the /mnt folder.
Tested this with Docker version 1.12.5, build 7392c3b on AWS
```
docker info
Containers: 2
 Running: 0
 Paused: 0
 Stopped: 2
Images: 0
Server Version: 1.12.5
Storage Driver: aufs
 Root Dir: /home/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 4
 Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: overlay bridge null host
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 3.13.0-105-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.859 GiB
Name: master
ID: 2NQU:D2C5:5WPL:IIDR:P6FO:OAG7:GHW6:ZJMQ:VDHI:B5CI:XFZJ:ZSZM
Docker Root Dir: /home/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8
```
@monikakatiyar16 When I manually kill `untar` process during build I get `Error processing tar
file(signal: killed):` in the build output. Leaving behind the
container in `docker ps -a` is the correct behavior, same thing happens in
any build error and lets you debug the problems that caused the build
to fail. I have no problem with deleting this container though and if I
do that all data in `/var/lib/docker/aufs` is cleaned up as well.
@tonistiigi Yes you are correct. I was able to delete the volume
associated with the container and it cleaned up everything, after killing the docker-untar process. Docker system prune also works in this case.
The actual case that left volumes behind was when, without killing the docker-untar process, I tried removing the container along with its volumes, which gave the following error:
`docker rm -v -f $(docker ps -a -q)`
`Error response from daemon: Driver aufs failed to remove root
filesystem
97931bf059a0ec219efd3f762dbb173cf9372761ff95746358c08e2b61f7ce79: rename
/home/lib/docker/aufs/diff/359d27c5b608c9dda1170d1e34e5d6c5d90aa2e94826257f210b1442317fad70
/home/lib/docker/aufs/diff/359d27c5b608c9dda1170d1e34e5d6c5d90aa2e94826257f210b1442317fad70-removing:
device or resource busy`
Daemon logs:
`Error removing mounted layer
78fb899aab981557bc2ee48e9738ff4c2fcf2d10a1984a62a77eefe980c68d4a: rename /home/lib/docker/aufs/diff/d2605125ef072de79dc948f678aa94dd6dde562f51a4c0bd08a210d5b2eba5ec
/home/lib/docker/aufs/diff/d2605125ef072de79dc948f678aa94dd6dde562f51a4c0bd08a210d5b2eba5ec-removing:
device or resource busy
ERRO[0956] Handler for DELETE /v1.25/containers/78fb899aab98 returned
error: Driver aufs failed to remove root filesystem 78fb899aab981557bc2ee48e9738ff4c2fcf2d10a1984a62a77eefe980c68d4a:
rename
/home/lib/docker/aufs/diff/d2605125ef072de79dc948f678aa94dd6dde562f51a4c0bd08a210d5b2eba5ec
/home/lib/docker/aufs/diff/d2605125ef072de79dc948f678aa94dd6dde562f51a4c0bd08a210d5b2eba5ec-removing: device or resource busy
ERRO[1028] Error unmounting container
78fb899aab981557bc2ee48e9738ff4c2fcf2d10a1984a62a77eefe980c68d4a: no
such file or directory`
It seems that the order to follow right now when interrupting a docker build is:
`Interrupt docker build -> kill the docker-untar process -> remove containers and volumes: docker rm -v -f $(docker ps -a -q)`
For `docker v1.13.0-rc4`, it can be:
`Interrupt docker build -> kill the docker-untar process -> docker system prune -a`
This seems to work perfectly. There are no issues with cleanup; the only issue is the docker-untar process not being killed along with the docker build process.
I will search for, update, or log a new issue for graceful interrupt of docker build that also stops the docker-untar process along with it.
(Verified this with docker v1.12.5 and v1.13.0-rc4)
Update: on killing docker-untar while the build context is being sent to the docker daemon, the build errors with `Error response from daemon: Error processing tar file(signal: terminated)`, but during layer copy it doesn't (for me).
Thanks for being so patient and for giving your time!
I'm seeing `/var/lib/docker/aufs` consistently increase in size on a docker swarm mode worker. The node is mostly autonomous, managed by the swarm manager, with very little manual container creation aside from some maintenance commands here and there.
I do run `docker exec` on service containers; not sure if that may be a cause.
My workaround to get this resolved in my case was to start up another
worker, set the full node to `--availability=drain` and manually move
over a couple of volume mounts.
```
ubuntu@ip-172-31-18-156:~$ docker --version
Docker version 1.12.3, build 6b644ec
```
This has hit our CI server for ages. This needs to be fixed.
@orf thanks
Same issue here. Neither removing containers, volumes, and images nor the Docker 1.13 cleanup commands have any effect.
I also confirm I cancelled some image builds. Maybe that leaves folders that can't be reached either.
I'll use the good old rm method for now, but this is clearly a bug.
Files in the /var/lib/docker/aufs/diff fills up 100% space for /dev/sda1 filesystem of 30G
```
root@Ubuntu:/var/lib/docker/aufs/diff# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             14G     0   14G   0% /dev
tmpfs           2.8G  273M  2.5G  10% /run
/dev/sda1        29G   29G     0 100% /
tmpfs            14G     0   14G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            14G     0   14G   0% /sys/fs/cgroup
/dev/sdb1       197G   60M  187G   1% /mnt
tmpfs           2.8G     0  2.8G   0% /run/user/1000
```
`du -h -d 1 /var/lib/docker/aufs/diff | grep '[0-9]G\|'` shows:
```
4.1G  /var/lib/docker/aufs/diff/a0cde42cbea362bbb2a73ffbf30059bcce7ef0256d1d7e186264f915d15
 14G  /var/lib/docker/aufs/diff/59aee33d8a607b5315ce103cd99f17b4dfdec73c9a2f3bb2afc7d02bfae
 20G  /var/lib/docker/aufs/diff
```
Also tried **docker system prune**, which did not help.
Has anyone found a solution for this ongoing issue of super large files in diff before this bug is fixed in the code?
Yes, a method has already been given, but here is an apocalyptic snippet that destroys everything we have in place here at work (except local folders for the volumes). Put it in your bashrc or another bash config file.
```
alias docker-full-cleanup='func_full-cleanup-docker'
func_full-cleanup-docker() {
echo "WARN: This will remove everything from docker: volumes, containers and images. Will you dare? [y/N] "
read choice
if [ \( "$choice" == "y" \) -o \( "$choice" == "Y" \) ]
then
sudo echo "| sudo rights check [OK]"
sizea=`sudo du -sh /var/lib/docker/aufs`
echo "Stopping all running containers"
containers=`docker ps -a -q`
if [ -n "$containers" ]
then
docker stop $containers
fi
echo "Removing all docker images and containers"
docker system prune -f
echo "Stopping Docker daemon"
sudo service docker stop
echo "Removing all leftovers in /var/lib/docker (bug #22207)"
sudo rm -rf /var/lib/docker/aufs
sudo rm -rf /var/lib/docker/image/aufs
sudo rm -f /var/lib/docker/linkgraph.db
echo "Starting Docker daemon"
sudo service docker start
sizeb=`sudo du -sh /var/lib/docker/aufs`
echo "Size before full cleanup:"
echo " $sizea"
echo "Size after full cleanup:"
echo " $sizeb"
fi
}
```
I ran the rm -rf command to remove the files from the diff folder for now. I will probably look into the script if the diff folder fills the entire disk space again.
Hope to see this issue fixed in the code, instead of workarounds.
Hi, I have the same issue on docker 1.10.2, running Kubernetes. This is my `docker info`:
```
Containers: 7
Running: 0
Paused: 0
Stopped: 7
Images: 4
Server Version: 1.10.2
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 50
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.4.0-31-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.954 GiB
Name: ubuntu-k8s-03
ID: NT23:5Y7J:N2UM:NA2W:2FHE:FNAS:56HF:WFFF:N2FR:O4T4:WAHC:I3PO
Debug mode (server): true
File Descriptors: 10
Goroutines: 23
System Time: 2017-02-14T15:25:00.740998058+09:00
EventsListeners: 0
Init SHA1: 3e247d0d32543488f6e70fbb7c806203f3841d1b
Init Path: /usr/lib/docker/dockerinit
Docker Root Dir: /var/lib/docker
WARNING: No swap limit support
```
I'm trying to track all unused diff directory under `/var/lib/docker/aufs/diff` and `/var/lib/docker/aufs/mnt/` by analyzing layer files under `/var/lib/docker/image/aufs/imagedb`, here is the script I used:
https://gist.github.com/justlaputa/a50908d4c935f39c39811aa5fa9fba33
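For reference, a stripped-down sketch of the same idea: collect every layer directory name that docker's metadata still references (`cache-id` for image layers, `mount-id` for container layers) and print the `aufs/diff` entries nothing points at. The exact on-disk layout is version-dependent, so treat this as read-only diagnostics, not a cleanup tool:

```shell
#!/usr/bin/env bash
# Sketch: list aufs/diff directories that no layer metadata references.
# Assumes the /var/lib/docker layout of docker 1.10+; verify before deleting.
shopt -s nullglob

list_orphan_diffs() {
  local root="$1" f id dir
  declare -A referenced=()
  # cache-id / mount-id files contain the aufs directory name for a layer
  for f in "$root"/image/aufs/layerdb/sha256/*/cache-id \
           "$root"/image/aufs/layerdb/mounts/*/mount-id; do
    id=$(cat "$f")
    referenced["$id"]=1
    referenced["$id-init"]=1   # container layers have a matching -init dir
  done
  for dir in "$root"/aufs/diff/*/; do
    dir=$(basename "$dir")
    if [[ -z "${referenced[$dir]:-}" ]]; then
      echo "$dir"
    fi
  done
  return 0
}
```

Run it as `list_orphan_diffs /var/lib/docker` and inspect the output before touching anything.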
But I ran into a problem when I stopped and restarted the docker daemon; it seems I left docker in an inconsistent state:
/var/log/upstart/docker.log:
```
DEBU[0277] Cleaning up old shm/mqueue mounts: start.
DEBU[0277] Cleaning up old shm/mqueue mounts: done.
DEBU[0277] Clean shutdown succeeded
Waiting for /var/run/docker.sock
DEBU[0000] docker group found. gid: 999
DEBU[0000] Server created for HTTP on unix (/var/run/docker.sock)
DEBU[0000] Using default logging driver json-file
INFO[0000] [graphdriver] using prior storage driver "aufs"
DEBU[0000] Using graph driver aufs
INFO[0000] Graph migration to content-addressability took 0.00 seconds
DEBU[0000] Option DefaultDriver: bridge
DEBU[0000] Option DefaultNetwork: bridge
INFO[0000] Firewalld running: false
DEBU[0000] /sbin/iptables, [--wait -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL -j DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t nat -D PREROUTING]
DEBU[0000] /sbin/iptables, [--wait -t nat -D OUTPUT]
DEBU[0000] /sbin/iptables, [--wait -t nat -F DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t nat -X DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t filter -F DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t filter -X DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION]
DEBU[0000] /sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION]
DEBU[0000] /sbin/iptables, [--wait -t nat -n -L DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t nat -N DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t filter -n -L DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t filter -n -L DOCKER-ISOLATION]
DEBU[0000] /sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION -j RETURN]
DEBU[0000] /sbin/iptables, [--wait -I DOCKER-ISOLATION -j RETURN]
/var/run/docker.sock is up
DEBU[0000] Registering ipam driver: "default"
DEBU[0000] releasing IPv4 pools from network bridge (dcfcc71060f02440ae53da5ee0f083ca51c33a290565f1741f451754ae6b4257)
DEBU[0000] ReleaseAddress(LocalDefault/10.254.69.0/24, 10.254.69.1)
DEBU[0000] ReleasePool(LocalDefault/10.254.69.0/24)
DEBU[0000] Allocating IPv4 pools for network bridge (159d0a404ff6564b4fcfe633f0c8c123c0c0606d28ec3b110272650c5fc1bcb6)
DEBU[0000] RequestPool(LocalDefault, 10.254.69.1/24, , map[], false)
DEBU[0000] RequestAddress(LocalDefault/10.254.69.0/24, 10.254.69.1, map[RequestAddressType:com.docker.network.gateway])
DEBU[0000] /sbin/iptables, [--wait -t nat -C POSTROUTING -s 10.254.69.0/24 ! -o docker0 -j MASQUERADE]
DEBU[0000] /sbin/iptables, [--wait -t nat -C DOCKER -i docker0 -j RETURN]
DEBU[0000] /sbin/iptables, [--wait -t nat -I DOCKER -i docker0 -j RETURN]
DEBU[0000] /sbin/iptables, [--wait -D FORWARD -i docker0 -o docker0 -j DROP]
DEBU[0000] /sbin/iptables, [--wait -t filter -C FORWARD -i docker0 -o docker0 -j ACCEPT]
DEBU[0000] /sbin/iptables, [--wait -t filter -C FORWARD -i docker0 ! -o docker0 -j ACCEPT]
DEBU[0000] /sbin/iptables, [--wait -t filter -C FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT]
DEBU[0001] /sbin/iptables, [--wait -t nat -C PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]
DEBU[0001] /sbin/iptables, [--wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]
DEBU[0001] /sbin/iptables, [--wait -t nat -C OUTPUT -m addrtype --dst-type LOCAL -j DOCKER ! --dst 127.0.0.0/8]
DEBU[0001] /sbin/iptables, [--wait -t nat -A OUTPUT -m addrtype --dst-type LOCAL -j DOCKER ! --dst 127.0.0.0/8]
DEBU[0001] /sbin/iptables, [--wait -t filter -C FORWARD -o docker0 -j DOCKER]
DEBU[0001] /sbin/iptables, [--wait -t filter -C FORWARD -o docker0 -j DOCKER]
DEBU[0001] /sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-ISOLATION]
DEBU[0001] /sbin/iptables, [--wait -D FORWARD -j DOCKER-ISOLATION]
DEBU[0001] /sbin/iptables, [--wait -I FORWARD -j DOCKER-ISOLATION]
WARN[0001] Your kernel does not support swap memory limit.
DEBU[0001] Cleaning up old shm/mqueue mounts: start.
DEBU[0001] Cleaning up old shm/mqueue mounts: done.
DEBU[0001] Loaded container 0790b33ec8e5345ac944d560263b8e13cb75f80dd82cd25753c7320bbcb2747c
DEBU[0001] Loaded container 0e36a6c9319e6b7ca4e5b5408e99d77d51b1f4e825248c039ba0260e628c483d
DEBU[0001] Loaded container 135fb2e8cad26d531435dcd19d454e41cf7aece289ddc7374b4c2a984f8b094a
DEBU[0001] Loaded container 2c28de46788ce96026ac8e61e99c145ec55517543e078a781e8ce6c8cddec973
DEBU[0001] Loaded container 35eb075b5815e621378eb8a7ff5ad8652819ec851eaa4f7baedb1383dfa51a57
DEBU[0001] Loaded container 6be37a301a8f52040adf811041c140408224b12599aa55155f8243066d2b0b69
DEBU[0001] Loaded container d98ac7f052fef31761b82ab6c717760428ad5734df4de038d80124ad5b5e8614
DEBU[0001] Starting container
2c28de46788ce96026ac8e61e99c145ec55517543e078a781e8ce6c8cddec973
ERRO[0001] Couldn't run auplink before unmount: exit status 22
ERRO[0001] error locating sandbox id
d4c538661db2edc23c79d7dddcf5c7a8886c9477737888a5fc2641bc5e66da8b:
sandbox d4c538661db2edc23c79d7dddcf5c7a8886c9477737888a5fc2641bc5e66da8b
not found
WARN[0001] failed to cleanup ipc mounts:
failed to umount
/var/lib/docker/containers/2c28de46788ce96026ac8e61e99c145ec55517543e078a781e8ce6c8cddec973/shm:
invalid argument
ERRO[0001] Failed to start container 2c28de46788ce96026ac8e61e99c145ec55517543e078a781e8ce6c8cddec973: error creating aufs mount to /var/lib/docker/aufs/mnt/187b8026621da2add42330c9393a474fcd9af2e4567596d61bcd7a40c85f71da: invalid argument
INFO[0001] Daemon has completed initialization
INFO[0001] Docker daemon commit=c3959b1 execdriver=native-0.2 graphdriver=aufs version=1.10.2
DEBU[0001] Registering routers
DEBU[0001] Registering HEAD, /containers/{name:.*}/archive
```
and when I try to create new containers with `docker run`, it fails with:
```
docker: Error response from daemon: error creating aufs mount to /var/lib/docker/aufs/mnt/f9609c0229baa2cdc6bc07c36970ef4f192431c1b1976766b3ea23d72c355df3-init: invalid argument.
See 'docker run --help'.
```
and the daemon log shows:
```
DEBU[0173] Calling POST /v1.22/containers/create
DEBU[0173] POST /v1.22/containers/create
DEBU[0173] form data: {"AttachStderr":false,"AttachStdin":false,"AttachStdout":false,"Cmd":["/hyperkube","kubelet","--api-servers=http://localhost:8080","--v=2","--address=0.0.0.0","--enable-server","--hostname-override=172.16.210.87","--config=/etc/kubernetes/manifests-multi","--cluster-dns=10.253.0.10","--cluster-domain=cluster.local","--allow_privileged=true"],"Domainname":"","Entrypoint":null,"Env":[],"HostConfig":{"Binds":["/sys:/sys:ro","/dev:/dev","/var/lib/docker/:/var/lib/docker:rw","/var/lib/kubelet/:/var/lib/kubelet:rw","/var/run:/var/run:rw","/etc/kubernetes/manifests-multi:/etc/kubernetes/manifests-multi:ro","/:/rootfs:ro"],"BlkioDeviceReadBps":null,"BlkioDeviceReadIOps":null,"BlkioDeviceWriteBps":null,"BlkioDeviceWriteIOps":null,"BlkioWeight":0,"BlkioWeightDevice":null,"CapAdd":null,"CapDrop":null,"CgroupParent":"","ConsoleSize":[0,0],"ContainerIDFile":"","CpuPeriod":0,"CpuQuota":0,"CpuShares":0,"CpusetCpus":"","CpusetMems":"","Devices":[],"Dns":[],"DnsOptions":[],"DnsSearch":[],"ExtraHosts":null,"GroupAdd":null,"IpcMode":"","Isolation":"","KernelMemory":0,"Links":null,"LogConfig":{"Config":{},"Type":""},"Memory":0,"MemoryReservation":0,"MemorySwap":0,"MemorySwappiness":-1,"NetworkMode":"host","OomKillDisable":false,"OomScoreAdj":0,"PidMode":"host","PidsLimit":0,"PortBindings":{},"Privileged":true,"PublishAllPorts":false,"ReadonlyRootfs":false,"RestartPolicy":{"MaximumRetryCount":0,"Name":"always"},"SecurityOpt":null,"ShmSize":0,"UTSMode":"","Ulimits":null,"VolumeDriver":"","VolumesFrom":null},"Hostname":"","Image":"gcr.io/google_containers/hyperkube:v1.1.8","Labels":{},"NetworkingConfig":{"EndpointsConfig":{}},"OnBuild":null,"OpenStdin":false,"StdinOnce":false,"StopSignal":"SIGTERM","Tty":false,"User":"","Volumes":{},"WorkingDir":""}
ERRO[0173] Couldn't run auplink before unmount: exit status 22
ERRO[0173] Clean up Error! Cannot destroy container 482957f3e4e92a0ba56d4787449daa5a8708f3b77efe0c603605f35d02057566: nosuchcontainer: No such container: 482957f3e4e92a0ba56d4787449daa5a8708f3b77efe0c603605f35d02057566
ERRO[0173] Handler for POST /v1.22/containers/create returned error: error creating aufs mount to /var/lib/docker/aufs/mnt/f9609c0229baa2cdc6bc07c36970ef4f192431c1b1976766b3ea23d72c355df3-init: invalid argument
```
Does anyone know whether my approach is correct or not? And why does the problem happen after I delete those folders?
I opened #31012 to at least make sure we don't leak these dirs in any circumstances.
We of course also need to look at the various causes of the `busy` errors
This has been biting me for as long as I can remember. I got pretty much the same results as described above when I switched to the `overlay2` driver some days ago and nuked the aufs folder completely (`docker system df` says 1.5 GB, `df` says 15 GB).
I had about 1T of diffs using storage. After restarting my docker daemon - I recovered about 700GB. So I guess stopping the daemon prunes these?
Restarting does nothing for me, unfortunately.
Service restart did not help. This is a serious issue. Removing all container/images does not remove those diffs.
Stopping the daemon would not prune these.
If you remove all containers and you still have `diff` dirs, then likely
you have some leaked rw layers.
We just encountered this issue. `/var/lib/docker/aufs/diff` took up 28G
and took our root filesystem to 100%, which caused our GitLab server to
stop responding. We're using docker for GitLab CI. To fix this, I used some of the commands @sogetimaitral suggested
above to delete the temp files, and we're back up and running. I
restarted the server and sent in a new commit to trigger CI, and
everything appears to be working just as it should.
I'm definitely concerned this is going to happen again. What's the deal here? Is this a docker bug that needs to be fixed?
1. Yes there is a bug (both that there are issues on removal and that --force on rm ignores these issues)
2. Generally one should not be writing lots of data to the container fs and instead use a volume (even a throw-away volume). A large diff dir would indicate that there is significant
amounts of data being written to the container fs.
If you don't use "--force" on remove you would not run into this issue
(or at least you'd see you have a bunch of "dead" containers and know
how/what to clean up.).
I'm not manually using docker at all. We're using
[gitlab-ci-multi-runner](https://gitlab.com/gitlab-org/gitlab-ci-multi-runner).
Could it be a bug on GitLab's end then?
It looks like (by default) it force-removes containers;
https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/dbdbce2848530df299836768c8ea01e209a2fe40/executors/docker/executor_docker.go#L878. Doing so can result in failures to remove the container being ignored, and leading to the orphaned diffs.
Ok, then that tells me that this is a gitlab-ci-multi-runner bug. Is that a correct interpretation? I'm happy to create an issue for them to fix this.
It's a combination
I guess; "force" remove makes it easier to handle cleanups (i.e., cases
where a container isn't stopped yet, etc), at the same time (that's the
"bug" @cpuguy83 mentioned), it can also hide actual issues, such as
docker failing to remove the containers filesystem (which can have
various reasons). With "force", the container is removed in such cases.
Without, the container is left around (but marked "dead")
If the gitlab runner can function correctly without the force remove, that'll probably be good to change (or make it configurable)
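If a runner can be changed, one middle ground is to stop the container first and retry a plain (non-force) `docker rm`, so a `device or resource busy` failure surfaces instead of being swallowed. The retry helper below is plain shell; the docker calls in the comment are illustrative assumptions, not the runner's actual code:

```shell
#!/usr/bin/env bash
# Generic retry helper: run a command up to N times, pausing between tries.
retry() {
  local tries="$1"; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Illustrative cleanup without --force (assumes a running daemon):
#   docker stop "$cid"
#   retry 5 docker rm -v "$cid" || echo "rm failed; container left for inspection" >&2
```

On final failure the container stays around (marked "dead"), which is exactly the debuggable state described above.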
I am using [Drone](https://github.com/drone/drone) and have the same issue. I didn't check the code how containers are removed, but i guess it force removes as well.
Could it be a Docker in Docker issue? I am starting Drone with docker-compose.
I decided to go ahead and submit a gitlab-ci-multi-runner issue just to loop the devs in: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/2304
Honestly we worked around this by running Spotify's docker gc with drone CI.
@sedouard Thanks for this tip! Running
[docker-gc](https://github.com/spotify/docker-gc) hourly from spotify
solved the problem for me.
We are getting this issue running from Gitlab CI (not running in
docker), using commands to build images / run containers, (not Gitlab CI
Docker integration). We are not running any form of force removal,
simply `docker run --rm ...` and `docker rmi image:tag`
**EDIT**: sorry, actually the original problem is the same. The difference is that running `spotify/docker-gc` does _not_ fix the problem.
----
As you can see below, I have 0 images, 0 containers, nothing!
`docker system info` agrees with me, but mentions `Dirs: 38` for the aufs storage.
That's suspicious!
If you look at `/var/lib/docker/aufs/diff/`, we see that there's
actually 1.7 GB of data there, over 41 directories. And that's my
personal box, on the production server it's 19 GB.
How do we clean this? using `spotify/docker-gc` does not remove these.
``` shell
philippe@pv-desktop:~$ docker images -a
REPOSITORY TAG IMAGE ID CREATED
SIZE
philippe@pv-desktop:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
philippe@pv-desktop:~$ docker system info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 17.03.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 38
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-72-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 31.34 GiB
Name: pv-desktop
ID: 2U5D:CRHS:RUQK:YSJX:ZTRS:HYMV:HO6Q:FDKE:R6PK:HMUN:2EOI:RUWO
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: silex
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
philippe@pv-desktop:~$ ls -alh /var/lib/docker/aufs/diff/
total 276K
drwxr-xr-x 40 root root 116K Apr 13 15:32 .
drwxr-xr-x 5 root root 4.0K Sep 18 2015 ..
drwxr-xr-x 4 root root 4.0K Jun 17 2016 005d00efb0ba949d627ad439aec8c268b5d55759f6e92e51d7828c12e3817147
drwxr-xr-x 8 root root 4.0K May 2 2016 0968e52874bbfaa938ffc869cef1c5b78e2d4f7a670e19ef47f713868b9bfbdf
drwxr-xr-x 4 root root 4.0K Jun 20 2016 188233e6dcc37e2308e69807ffd19aca3e61be367daae921f2bcb15a1d6237d0
drwxr-xr-x 6 root root 4.0K Jun 20 2016 188233e6dcc37e2308e69807ffd19aca3e61be367daae921f2bcb15a1d6237d0-init
drwxr-xr-x 21 root root 4.0K Apr 8 2016 250ecb97108a6d8a8c41f9d2eb61389a228c95f980575e95ee61f9e8629d5180
drwxr-xr-x 2 root root 4.0K Dec 22 2015 291f16f99d9b0bc05100e463dbc007ef816e0cf17b85d20cf51da5eb2b866810
drwxr-xr-x 2 root root 4.0K May 2 2016 3054baaa0b4a7b52da2d25170e9ce4865967f899bdf6d444b571e57be141b712
drwxr-xr-x 2 root root 4.0K Feb 5 2016 369aca82a5c05d17006b9dca3bf92d1de7d39d7cd908ed665ef181649525464e
drwxr-xr-x 3 root root 4.0K Jun 17 2016 3835a1d1dfe755d9d1ada6933a0ea7a4943caf8f3d96eb3d79c8de7ce25954d2
(...strip)
philippe@pv-desktop:~$ du -hs /var/lib/docker/aufs/diff/
1.7G /var/lib/docker/aufs/diff/
philippe@pv-desktop:~$ docker system prune -a
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all images without at least one container associated to them
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0 B
```
Can I safely `rm -r /var/lib/docker/aufs` and restart the docker deamon?
Running `spotify/docker-gc`
does not clean those orphans.
**EDIT**: thanks @CVTJNII!
Stopping the Docker daemon and erasing all of /var/lib/docker will be
safer. Erasing /var/lib/docker/aufs will cause you to lose your images
anyway so it's better to start with a clean /var/lib/docker in my
opinion. This is the "solution" I've been using for several months for
this problem now.
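Concretely, that workaround amounts to something like the sketch below (assuming systemd manages the daemon; `wipe_docker_state` is a hypothetical helper, and the destructive step is rehearsed on a scratch directory standing in for `/var/lib/docker`):

```shell
# wipe_docker_state: the full-wipe workaround described above.
# Erasing only aufs/ leaves dangling layer metadata in image/,
# so the whole state directory goes.
wipe_docker_state() {
    dir="$1"
    # systemctl stop docker   # stop the daemon first on a real host;
                              # deleting under a live daemon is what
                              # produces "device or resource busy"
    rm -rf "$dir"
    # systemctl start docker  # the daemon recreates an empty dir
}

# Rehearsal on a scratch directory instead of the real /var/lib/docker:
scratch="$(mktemp -d)/docker"
mkdir -p "$scratch/aufs/diff/0123abcd-removing"
wipe_docker_state "$scratch"
```

All images and containers are lost, of course, which is exactly the trade-off discussed above.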
Starting with 17.06 there should no longer be any *new* orphaned diffs.
Instead you may start seeing containers with the state `Dead`, this
happens if there was an error during removal that is non-recoverable and
may require an admin to deal with it.
In addition, removal is a bit more robust, and
less prone to error due to race conditions or failed unmounts.
@cpuguy83: great news, can you explain what the admin would need to do
if that happens?
@Silex It depends on the cause.
Typically what has happened is there is a `device or resource busy`
error due to some mount being leaked into a container. If you are
running something like cadvisor this is pretty much a guarantee as the
instructions say to mount the whole docker dir into the cadvisor
container.
This *can* be tricky, you may have to stop the offending container(s)
and then remove the `dead` container.
If you are on a newer kernel (3.15+) it is unlikely that you would see
the issue anymore, though there still may be some edge case.
Docker version 17.06.0-ce, build 02c1d87
I tried removing all images, volumes, networks, and containers, but it didn't help.
Also tried commands:
```
docker system prune -af
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc:ro spotify/docker-gc
```
Still remain files:
```
root@Dark:/var/lib/docker/aufs# ls -la *
diff:
total 92
drwx------ 12 root root 45056 Jul 28 17:28 .
drwx------ 5 root root 4096 Jul 9 00:18 ..
drwxr-xr-x 4 root root 4096 Jul 10 01:35 78f8ecad2e94fedfb1ced425885fd80bb8721f9fd70715de2ce373986785b882
drwxr-xr-x 6 root root 4096 Jul 10 01:35 78f8ecad2e94fedfb1ced425885fd80bb8721f9fd70715de2ce373986785b882-init
drwxr-xr-x 5 root root 4096 Jul 10 01:35 7caa9688638ea9669bac451b155b65b121e99fcea8d675688f0c76678ba85ccd
drwxr-xr-x 6 root root 4096 Jul 10 01:35 7caa9688638ea9669bac451b155b65b121e99fcea8d675688f0c76678ba85ccd-init
drwxr-xr-x 4 root root 4096 Jul 12 14:45 b7b7770aae461af083e72e5e3232a62a90f934c83e38830d06365108e302e7ac
drwxr-xr-x 6 root root 4096 Jul 12 14:45 b7b7770aae461af083e72e5e3232a62a90f934c83e38830d06365108e302e7ac-init
drwxr-xr-x 4 root root 4096 Jul 10 01:35 d5752b27b341e17e730d3f4acbec04b10e41dc01ce6f9f98ff38208c0647f2e4
drwxr-xr-x 6 root root 4096 Jul 10 01:35 d5752b27b341e17e730d3f4acbec04b10e41dc01ce6f9f98ff38208c0647f2e4-init
drwxr-xr-x 6 root root 4096 Jul 10 01:35 e412d3c6f0f5f85e23d7a396d47c459f5d74378b474b27106ab9b82ea829dbfb
drwxr-xr-x 6 root root 4096 Jul 10 01:35 e412d3c6f0f5f85e23d7a396d47c459f5d74378b474b27106ab9b82ea829dbfb-init
layers:
total 52
drwx------ 2 root root 45056 Jul 28 17:28 .
drwx------ 5 root root 4096 Jul 9 00:18 ..
-rw-r--r-- 1 root root 0 Jul 10 01:35 78f8ecad2e94fedfb1ced425885fd80bb8721f9fd70715de2ce373986785b882
-rw-r--r-- 1 root root 0 Jul 10 01:35 78f8ecad2e94fedfb1ced425885fd80bb8721f9fd70715de2ce373986785b882-init
-rw-r--r-- 1 root root 0 Jul 10 01:35 7caa9688638ea9669bac451b155b65b121e99fcea8d675688f0c76678ba85ccd
-rw-r--r-- 1 root root 0 Jul 10 01:35 7caa9688638ea9669bac451b155b65b121e99fcea8d675688f0c76678ba85ccd-init
-rw-r--r-- 1 root root 0 Jul 12 14:45 b7b7770aae461af083e72e5e3232a62a90f934c83e38830d06365108e302e7ac
-rw-r--r-- 1 root root 0 Jul 12 14:45 b7b7770aae461af083e72e5e3232a62a90f934c83e38830d06365108e302e7ac-init
-rw-r--r-- 1 root root 0 Jul 10 01:35 d5752b27b341e17e730d3f4acbec04b10e41dc01ce6f9f98ff38208c0647f2e4
-rw-r--r-- 1 root root 0 Jul 10 01:35 d5752b27b341e17e730d3f4acbec04b10e41dc01ce6f9f98ff38208c0647f2e4-init
-rw-r--r-- 1 root root 0 Jul 10 01:35 e412d3c6f0f5f85e23d7a396d47c459f5d74378b474b27106ab9b82ea829dbfb
-rw-r--r-- 1 root root 0 Jul 10 01:35 e412d3c6f0f5f85e23d7a396d47c459f5d74378b474b27106ab9b82ea829dbfb-init
mnt:
total 92
drwx------ 12 root root 45056 Jul 28 17:28 .
drwx------ 5 root root 4096 Jul 9 00:18 ..
drwxr-xr-x 2 root root 4096 Jul 10 01:35 78f8ecad2e94fedfb1ced425885fd80bb8721f9fd70715de2ce373986785b882
drwxr-xr-x 2 root root 4096 Jul 10 01:35 78f8ecad2e94fedfb1ced425885fd80bb8721f9fd70715de2ce373986785b882-init
drwxr-xr-x 2 root root 4096 Jul 10 01:35 7caa9688638ea9669bac451b155b65b121e99fcea8d675688f0c76678ba85ccd
drwxr-xr-x 2 root root 4096 Jul 10 01:35 7caa9688638ea9669bac451b155b65b121e99fcea8d675688f0c76678ba85ccd-init
drwxr-xr-x 2 root root 4096 Jul 12 14:45 b7b7770aae461af083e72e5e3232a62a90f934c83e38830d06365108e302e7ac
drwxr-xr-x 2 root root 4096 Jul 12 14:45 b7b7770aae461af083e72e5e3232a62a90f934c83e38830d06365108e302e7ac-init
drwxr-xr-x 2 root root 4096 Jul 10 01:35 d5752b27b341e17e730d3f4acbec04b10e41dc01ce6f9f98ff38208c0647f2e4
drwxr-xr-x 2 root root 4096 Jul 10 01:35 d5752b27b341e17e730d3f4acbec04b10e41dc01ce6f9f98ff38208c0647f2e4-init
drwxr-xr-x 2 root root 4096 Jul 10 01:35 e412d3c6f0f5f85e23d7a396d47c459f5d74378b474b27106ab9b82ea829dbfb
drwxr-xr-x 2 root root 4096 Jul 10 01:35 e412d3c6f0f5f85e23d7a396d47c459f5d74378b474b27106ab9b82ea829dbfb-init
```
```
# docker system df
TYPE            TOTAL  ACTIVE  SIZE  RECLAIMABLE
Images          0      0       0B    0B
Containers      0      0       0B    0B
Local Volumes   0      0       0B    0B
```
How can it be deleted?
@haos616 try stopping all running containers first, and then run `docker
system prune -af`.
This did the trick for me.
Didn't work while I had a container running.
If it's an upgrade from a previous version of docker, it's possible
those diffs were generated / left behind by that version. Docker 17.06
won't remove a container if layers failed to be removed (when using
--force); older versions did, which could lead to orphaned layers
@julian-pani I did so in the beginning but it does not help.
```
# docker system df
TYPE            TOTAL  ACTIVE  SIZE  RECLAIMABLE
Images          0      0       0B    0B
Containers      0      0       0B    0B
Local Volumes   0      0       0B    0B
```
@thaJeztah No. I cleaned the Docker one or two months ago. Then the
version was already 17.06. I used command `docker system prune -af`. It
removed everything.
Running https://github.com/spotify/docker-gc as a container worked for
me, but it went a step extra and deleted some of my required images too
:(
So I've put a small wrapper script as below to be safe
```
#!/bin/sh
docker images -q > /etc/docker-gc-exclude   # save all current images as excludes
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc:ro spotify/docker-gc
```
thanks again to spotify
IIUC, the spotify
script just calls `docker rm` and `docker rmi` - did it actually remove
orphaned diffs?
Just some feedback for the community, I've read through all of this and
none of the solutions actually seem to work consistently or reliably.
My "fix" was simply to double the amount of disk space on my AWS
instances. I know all too well that's a crappy fix, but it is the
best workaround I've found for Docker's bloated aufs. This really,
really needs to be fixed.
@fuzzygroup 17.06 should no longer create orphaned diffs, but it won't
clean up the old ones yet.
I could clean up with this script. I don't see why it wouldn't work, but
who knows.
Anyway, it's working fine for me. It will delete all images, containers, and volumes... As it should not run very often, I find that a minor side effect. But it's up to you to use it or not.
https://gist.github.com/Karreg/84206b9711cbc6d0fbbe77a57f705979
https://stackoverflow.com/q/45798076/562769 seems to be related. I've posted a quick fix.
FYI, still seeing this with `17.06.1-ce`
```
Containers: 20
Running: 0
Paused: 0
Stopped: 20
Images: 124
Server Version: 17.06.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 185
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
apparmor
Kernel Version: 4.4.0-83-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 7.796GiB
Name: gitlab-cirunner
ID: PWLR:R6HF:MK3Y:KN5A:AWRV:KHFY:F36D:WASF:7K7B:U7FY:2DJA:DBE2
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
```
`/var/lib/docker/aufs/diff` contains lots of directories with the `-init-removing` and `-removing` prefix:
```
ffd5477de24b0d9993724e40175185038a62250861516030a33280898243e742-init-removing
ffd5477de24b0d9993724e40175185038a62250861516030a33280898243e742-removing
ffd900de0634992e99c022a16775805dfd0ffd1f6c89fece7deb6b1a71c5e38c-init-removing
ffd900de0634992e99c022a16775805dfd0ffd1f6c89fece7deb6b1a71c5e38c-removing
```
> FYI, still seeing this with 17.06.1-ce
Still seeing what, exactly?
There should not be any way that a diff dir can leak anymore, though
diff dirs that existed before the upgrade will still be there.
Still seeing orphaned diffs as far as I can tell. `docker system prune`
didn't remove them, neither did
`docker-gc`. Manually running `rm -rf
/var/lib/docker/aufs/diff/*-removing` seems to be working.
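For anyone doing that manual cleanup, here's a hedged sketch that deletes only the suffixed leftovers (rehearsed on a scratch directory; point it at `/var/lib/docker/aufs/diff`, with the daemon stopped, to do it for real):

```shell
# DIFF_DIR stands in for /var/lib/docker/aufs/diff in this rehearsal.
DIFF_DIR="$(mktemp -d)"
mkdir -p "$DIFF_DIR/aaa-removing" "$DIFF_DIR/aaa-init-removing" "$DIFF_DIR/bbb"

# '*-removing' matches both "-removing" and "-init-removing";
# -maxdepth 1 keeps find out of the layer contents themselves.
find "$DIFF_DIR" -maxdepth 1 -type d -name '*-removing' -exec rm -rf {} +

ls "$DIFF_DIR"   # the live layer directory is untouched
```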
Yes, docker will not clean up old orphaned dirs yet.
By old you mean those created from a previous version of docker with
this issue?
This is a fresh install of Docker we did about two weeks ago, those
orphans must have been created since then, so it seems that docker must
still be creating those orphans?
I mean, in the last half an hour I've got `112` new diffs with
`-removing`, since I rm'ed them manually.
```
$ ls /var/lib/docker/aufs/diff/ | grep removing | wc -l
112
```
You said "17.06 should no
longer create orphaned diffs, but it won't clean up the old ones yet.",
but surely this cannot be correct, or am I missing something? Are those
tagged with `-removing` not orphaned?
@orf On a newer kernel, it's highly unexpected to have any issue
at all during removal. Are you mounting `/var/lib/docker` into a
container?
I'll check in the aufs driver to see if there's a specific issue there
with it reporting a successful remove when it really wasn't.
We are not mounting `/var/lib/docker` into a container.
```
$ uname -a
Linux gitlab-cirunner 4.4.0-83-generic #106~14.04.1-Ubuntu SMP Mon Jun 26 18:10:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
```
We are running 14.04 LTS
Let me know if there is anything I can provide to help debug this.
For other reasons (swarm mode networking) I moved off 14.04 for Docker
machines.
This appears to be worse with 17.06.01-ce. I updated a build machine to
this version and immediately started seeing the `*-init-removing` and
the `*-removing` directories left around as part of the build process. I stopped the service, removed the `/var/lib/docker`
directory, restarted the service and after a few builds was close to
out of disk space again. I stopped the service again, ran `apt-get purge docker-ce`, removed `/var/lib/docker` again and installed the 17.06.0-ce version. Not getting the extra directories in `/var/lib/docker/aufs/diff` and disk space is representative of images
that are on the build machine. I've reproduced the behavior on my
development machine as well - just building an image seems to create these extra directories for each layer of the image so I would run out of disk space really quick. Again, reverting to 17.06.0-ce seems to not have the problem so I'm going to stay there for now.
@mmanderson Thanks for reporting. Taking a look at changes in the AUFS driver.
@mmanderson Do you have any containers in the `Dead` state in `docker ps -a`?
All of my docker build servers are running out of space.

I have upgraded within the last week or so to Docker version 17.06.1-ce, build 874a737.
I believe that nothing else has changed and that this issue either
emerged or manifested as part of the upgrade process. The aufs diff directory is massive and I already pruned all images and dangling volumes.
[issue-22207.txt](https://github.com/moby/moby/files/1240116/issue-22207.txt)
@cpuguy83 No containers in any state. Here is what I just barely did to demonstrate this with 17.06.01-ce:
1. Started with a fresh install of docker 17.06.01-ce on Ubuntu 16.04.03 LTS (i.e. docker not
installed and no /var/lib/docker directory). After install verified an
empty /var/lib/docker/aufs/diff directory.
2. Ran a docker build with a fairly simple dockerfile based on
ubuntu:latest - all it does is pull statsd_exporter from github and
extract it into /usr/bin (see attached file).
3. After running the build, run `docker ps -a` to show no containers in any state. There are several `*-removing` folders in the `/var/lib/docker/aufs/diff` folder.
4. Run `docker system df` to verify images, containers, and volumes. The result is:
```
TYPE            TOTAL  ACTIVE  SIZE     RECLAIMABLE
Images          2      0       132.7MB  132.7MB (100%)
Containers      0      0       0B       0B
Local Volumes   0      0       0B       0B
```
5. Running `du -sch /var/lib/docker/*/` shows 152M for `/var/lib/docker/aufs/`
6. Run `docker rmi $(docker images -q)` to remove the built image layers. Running `docker system df` after this shows all zeros. Running `du -sch /var/lib/docker/*/` still shows 152M for `/var/lib/docker/aufs/`, and there are `*-removing` folders for all of the folders that didn't have them before, along with the existing `*-removing` folders that are still there.
@erikh is this the issue you are experiencing?
@cpuguy83 After uninstalling 17.06.01-ce, removing the /var/lib/docker
directory, and installing 17.06.0-ce, I tried to run the same build. The
build fails because of the `ADD` from remote URLs bug that was fixed in
17.06.01. However, I don't get any `*-removing` directories for the
steps that do complete, and after cleaning up everything with `docker
system prune` and `docker rmi $(docker images -q)` the
`/var/lib/docker/aufs/diff` directory is again empty and the space is
freed.
Thanks all, this is a regression in 17.06.1...
PR to fix is here: https://github.com/moby/moby/pull/34587
awesome, thanks for the quick patch @cpuguy83! /cc @erikh
@rogaha! yes, thanks to you and @cpuguy83!
Thank you so much @Karreg for your [excellent
script](https://github.com/moby/moby/issues/22207#issuecomment-322707352).
After getting rid of all the old orphaned diffs and freeing huge amounts
of lost disk space, we are now using it regularly to clean our
VMs before installing new docker images. A great help and an almost
perfect workaround for this issue. @TP75
Looks like Docker, Inc. have some contracts with computer data storage manufacturers.
@Karreg's script worked fine for me and I freed all the space in the diffs directory.
Having the same issue.
Docker Host Details
root@UbuntuCont:~# docker info
Containers: 3
Running: 0
Paused: 0
Stopped: 3
Images: 4
Server Version: 17.06.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 14
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-93-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 3.358GiB
Name: UbuntuCont
ID: QQA5:DC5S:C2FL:LCC6:XY6E:V3FR:TRW3:VMOQ:QQKD:AP2M:H3JA:I6VX
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
root@UbuntuCont:/var/lib/docker/aufs/diff# ls
031c85352fe85f07fede77dee0ac9dc2c7723177a819e72c534e1399208c95fa
09d53040e7e6798b5987ea76fe4f84f0906785b94a392a72e8e41a66cd9f242d
09d53040e7e6798b5987ea76fe4f84f0906785b94a392a72e8e41a66cd9f242d-init
0fb1ffc90969e9706801e2a18870f3ecd857a58f1094fbb968b3fa873e4cf2e4
10549179bd21a9c7af018d4ef305bb9196413b9662fce333b607104c40f38781
10d86a48e03cabf9af2c765dc84824809f24674ac339e4b9ffe572f50bd26b9c-init-removing
10d86a48e03cabf9af2c765dc84824809f24674ac339e4b9ffe572f50bd26b9c-removing
2e226946e8e6c2b3613de2afcff4cbb9890b6d9bd365fdda121a51ae96ec5606
2e226946e8e6c2b3613de2afcff4cbb9890b6d9bd365fdda121a51ae96ec5606-init
3601f6953132f557df8b52e03016db406168d3d6511d7ff5c08a90925ea288da-init-removing
3601f6953132f557df8b52e03016db406168d3d6511d7ff5c08a90925ea288da-removing
4b29141243aea4e70472f25a34a91267ab19c15071862c53e903b99740603d4c-init-removing
4b29141243aea4e70472f25a34a91267ab19c15071862c53e903b99740603d4c-removing
520e3fcf82e0fbbb48236dd99b6dee4c5bb9073d768511040c414f205c787dc5-init-removing
520e3fcf82e0fbbb48236dd99b6dee4c5bb9073d768511040c414f205c787dc5-removing
59cbb25a4858e7d3eb9146d64ff7602c9abc68509b8f2ccfe3be76681481904f
5d1c661b452efce22fe4e109fad7a672e755c64f538375fda21c23d49e2590f6
605893aba54feee92830d56b6ef1105a4d2166e71bd3b73a584b2afc83319591
63bd53412210f492d72999f9263a290dfee18310aa0494cb92e0d926d423e281-init-removing
63bd53412210f492d72999f9263a290dfee18310aa0494cb92e0d926d423e281-removing
72146e759ab65c835e214e99a2037f4b475902fdbe550c46ea0d396fb5ab2779-init-removing
72146e759ab65c835e214e99a2037f4b475902fdbe550c46ea0d396fb5ab2779-removing
8147e0b06dcbce4aa7eb86ed74f4ee8301e5fe2ee73c3a80dcb230bd0ddfcc26-init-removing
8147e0b06dcbce4aa7eb86ed74f4ee8301e5fe2ee73c3a80dcb230bd0ddfcc26-removing
a72735551217bb1ad01b77dbdbb9b8effa9f41315b0c481f8d74b5606c50deb4
aa58f2000b9f7d1ed2a6b476740c292c3c716e1d4dc04b7718580a490bba5ee8
b552cb853e33a8c758cb664aec70e2c4e85eacff180f56cbfab988a8e10c0174-removing
cd80c351b81ed13c4b64d9dfdc20c84f6b01cbb3e26f560faf2b63dae12dec55-init-removing
cd80c351b81ed13c4b64d9dfdc20c84f6b01cbb3e26f560faf2b63dae12dec55-removing
fe903be376821b7afee38a016f9765136ecb096c59178156299acb9f629061a2
fe903be376821b7afee38a016f9765136ecb096c59178156299acb9f629061a2-init
@kasunsjc please read the posts just above yours.
I confirm upgrading to 17.06.2-ce solved this issue. I didn't have to
manually remove the directories either (last time) after the upgrade.
17.06.2-ce _appears_ to have fixed this for me as well. No more
`-removing` directories in there, got a decent amount of space back.
I'm assuming that the `-init` directories I have in `aufs/diff` are
unrelated (some of them are pretty old). They are all small, though, so
it hardly matters.
Updating to 17.07.0 solved the issue here too; not even `docker system
prune --all -f` would remove the directories before, but after upgrading
they got auto-removed on reboot.
Confirming this issue was resolved on Ubuntu 16.04 with 17.06.2-ce. As
soon as it was updated, the space cleared.
I'd like to know why docker uses so much disk, even after removing _all_ containers, images, and volumes.
It looks like this "diff" has a layer, but the layer isn't referenced by anything at all.
```
/var/lib/docker/aufs/diff# du-summary
806628 c245c4c6d71ecdd834974e1e679506d33c4aac5f552cb4b28e727a596efc1695-removing
302312 a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
302304 957e78f9f9f4036689734df16dabccb98973e2c3de0863ef3f84de85dca8d92d
302256 8db1d610f3fbc71415f534a5d88318bbd2f3f783375813f2288d15f15846d312
288204 ac6b8ff4c0e7b91230ebf4c1caf16f06c1fdceff6111fd58f4ea50fc2dd5050b
288180 04a478c413ea80bcfa7f6560763beef991696eace2624254479e5e5dd69708c6
287804 d033ab6e4e5231dc46c6c417c680b239bb0e843024738517cbb0397128e166ca
233420 8e21143dca49e30cae7475b71b5aee9b92abe2069fbb9ab98ce9c334e3f6d4fa
212668 a631b94f7a2d5d21a96a78e9574d39cdeebbc81b51ac6c58bd48dc4045656477
205120 ae13341f8c08a925a95e5306ac039b0e0bbf000dda1a60afb3d15c838e43e349
205120 8d42279017d6095bab8d533ab0f1f7de229aa7483370ef53ead71fe5be3f1284
205116 59b3acd8e0cfd194d44313978d4b3769905cdb5204a590069c665423b10150e3
205116 040af0eee742ec9fb2dbeb32446ce44829cd72f02a2cf31283fcd067e73798ab
158024 ef0a29ff0b515c8c57fe78bcbd597243de9f7b274d9b212c774d91bd45a6c9b1
114588 061bd7e021afd4aaffa9fe6a6de491e10d8d37d9cbe7612138f58543e0985280
114576 149e8d2745f6684bc2106218711991449c452d4c7e6203e2a0f46651399162b0
114532 52b28112913abb0ed1b3267a0baa1cacd022ca6611812d0a8a428e61ec399589
114300 52475beba19687a886cba4bdb8508d5aaf051ceb52fb3a65294141ab846c8294
76668 4e6afb958b5ee6dea6d1a886d19fc9c780d4ecc4baeebfbde31f9bb97732d10d
76640 c61340c6a962ddd484512651046a676dbbc6a5d46aecc26995c49fe987bf9cdc
/var/lib/docker/aufs/diff# du -hs a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
296M a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
$ docker-find a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
+ docker=/var/lib/docker
+ sudo find /var/lib/docker '(' -path '/var/lib/docker/aufs/diff/*' -o -path '/var/lib/docker/aufs/mnt/*' ')' -prune -o -print
+ grep a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
/var/lib/docker/aufs/layers/a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
+ sudo find /var/lib/docker '(' -path '/var/lib/docker/aufs/diff/*' -o -path '/var/lib/docker/aufs/mnt/*' ')' -prune -o -type f -print0
+ sudo xargs -0 -P20 grep -l a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
/var/lib/docker/aufs/layers/993e4988c510ec3ab4f6d139740a059df40585576f8196817e573a9684554c5c
/var/lib/docker/aufs/layers/95e68d59a8704f2bb52cc1306ca910ddb7af8956eb7c57970fcf7d8b3d9baddb
/var/lib/docker/aufs/layers/4e6afb958b5ee6dea6d1a886d19fc9c780d4ecc4baeebfbde31f9bb97732d10d
/var/lib/docker/aufs/layers/fd895b6f56aedf09c48dba97931a34cea863a21175450c31b6ceadde03f7b3da
/var/lib/docker/aufs/layers/ac6b8ff4c0e7b91230ebf4c1caf16f06c1fdceff6111fd58f4ea50fc2dd5050b
/var/lib/docker/aufs/layers/f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579-init
/var/lib/docker/aufs/layers/d5bbef5adf2efb6f15d4f96c4bee21beb955255d1ec17baf35de66e98e6c7328
/var/lib/docker/aufs/layers/9646360df378b88eae6f1d6288439eebd9647d5b9e8a471840d4a9d6ed5d92a4
/var/lib/docker/aufs/layers/cf9fd1c4a64baa39b6d6d9dac048ad2fff3c3fe13924b07377e767eed230ba9f
/var/lib/docker/aufs/layers/f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579
/var/lib/docker/aufs/layers/23ce5a473b101d85f0e9465debe5a0f3b8a2079b99528a797b02052d06bc11d8
/var/lib/docker/image/aufs/layerdb/sha256/d1c659b8e3d0e893e95c8eedc755adcb91a1c2022e1090376b451f7206f9b1c0/cache-id
$ sudo cat /var/lib/docker/image/aufs/layerdb/sha256/d1c659b8e3d0e893e95c8eedc755adcb91a1c2022e1090376b451f7206f9b1c0/diff
sha256:b5185949ba02a6e065079660b0536672c9691fb0e0cb1fd912b2c7b29c91d625
$ docker-find sha256:b5185949ba02a6e065079660b0536672c9691fb0e0cb1fd912b2c7b29c91d625
+ docker=/var/lib/docker
+ sudo find /var/lib/docker '(' -path '/var/lib/docker/aufs/diff/*' -o -path '/var/lib/docker/aufs/mnt/*' ')' -prune -o -print
+ grep sha256:b5185949ba02a6e065079660b0536672c9691fb0e0cb1fd912b2c7b29c91d625
+ sudo find /var/lib/docker '(' -path '/var/lib/docker/aufs/diff/*' -o -path '/var/lib/docker/aufs/mnt/*' ')' -prune -o -type f -print0
+ sudo xargs -0 -P20 grep -l sha256:b5185949ba02a6e065079660b0536672c9691fb0e0cb1fd912b2c7b29c91d625
/var/lib/docker/image/aufs/layerdb/sha256/d1c659b8e3d0e893e95c8eedc755adcb91a1c2022e1090376b451f7206f9b1c0/diff
```
```
# docker --version
Docker version 1.10.3, build 99b71ce
# docker info
Containers: 3
Running: 0
Paused: 0
Stopped: 3
Images: 29
Server Version: 1.10.3
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 99
Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 3.13.0-83-generic
Operating System: <unknown>
OSType: linux
Architecture: x86_64
CPUs: 24
Total Memory: 125.9 GiB
Name: dev34-devc
ID: VKMX:YMJ2:3NGV:5J6I:5RYM:AVBK:QPOZ:ODYE:VQ2D:AF2J:2LEM:TKTE
WARNING: No swap limit support
```
I should also show that docker lists no containers, volumes, or images:
```
$ docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker volume ls
DRIVER VOLUME NAME
```
Strange; especially because of:
```
Containers: 3
Running: 0
Paused: 0
Stopped: 3
Images: 29
```
which doesn't match the output of `docker images` / `docker ps`.
What operating system are you running on?
```
Operating System: <unknown>
```
@tonistiigi any idea?
That was afterward. I guess some processes kicked off in the meantime.
The state I'm referring to (I have now) is:
```
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
```
And I still have:
```
$ sudo du -hs /var/lib/docker/aufs/diff/a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
296M /var/lib/docker/aufs/diff/a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
```
We're on Ubuntu Lucid with an upgraded kernel =/
```
$ uname -a
Linux dev34-devc 3.13.0-83-generic #127-Ubuntu SMP Fri Mar 11 00:25:37 UTC 2016 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 10.04.1 LTS
Release: 10.04
Codename: lucid
```
It seems an interesting issue.
Is it possible to have a way to reproduce it? @bukzor
Surely it's possible, but I don't know how.
Please try running the below script on one of your active docker hosts and see what's left.
In our case, there's always plenty of diffs left behind.
```bash
#!/bin/bash
set -eu
echo "WARNING:: This will stop ALL docker processes and remove ALL docker images."
read -p "Continue (y/n)? "
if [ "$REPLY" != "y" ]; then
echo "Aborting."
exit 1
fi
xdocker() { exec xargs -P10 -r -n1 --verbose docker "$@"; }
set -x
# remove containers
docker ps -q | xdocker stop
docker ps -aq | xdocker rm
# remove tags
docker images | sed 1d | grep -v '^<none>' | awk '{print $1 ":" $2}' | xdocker rmi
# remove images
docker images -q | xdocker rmi
docker images -aq | xdocker rmi
# remove volumes
docker volume ls -q | xdocker volume rm
```
One possible way I see this happening is that if there are errors on
aufs unmounting. For example, if there are EBUSY errors then probably
the image configuration has already been deleted before.
@bukzor
Would be very interesting if there was a reproducer that would start
from an empty graph directory, pull/run images and get it into a state
where it doesn't fully clean up after running your script.
That would be interesting, but sounds like a full day's work.
I can't commit to that.
Here's some more data regarding the (arbitrarily selected) troublesome diff above, `a800`.
```sh
$ docker-find a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea | sudo xargs -n1 wc -l | sort -rn
+ sudo find /nail/var/lib/docker '(' -path '/nail/var/lib/docker/aufs/diff/*' -o -path '/nail/var/lib/docker/aufs/mnt/*' ')' -prune -o -print
+ grep a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
+ sudo find /nail/var/lib/docker '(' -path '/nail/var/lib/docker/aufs/diff/*' -o -path '/nail/var/lib/docker/aufs/mnt/*' ')' -prune -o -type f -print0
+ sudo xargs -0 -P20 grep -l a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
15 /nail/var/lib/docker/aufs/layers/f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579
14 /nail/var/lib/docker/aufs/layers/f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579-init
13 /nail/var/lib/docker/aufs/layers/993e4988c510ec3ab4f6d139740a059df40585576f8196817e573a9684554c5c
12 /nail/var/lib/docker/aufs/layers/cf9fd1c4a64baa39b6d6d9dac048ad2fff3c3fe13924b07377e767eed230ba9f
11 /nail/var/lib/docker/aufs/layers/4e6afb958b5ee6dea6d1a886d19fc9c780d4ecc4baeebfbde31f9bb97732d10d
10 /nail/var/lib/docker/aufs/layers/23ce5a473b101d85f0e9465debe5a0f3b8a2079b99528a797b02052d06bc11d8
9 /nail/var/lib/docker/aufs/layers/95e68d59a8704f2bb52cc1306ca910ddb7af8956eb7c57970fcf7d8b3d9baddb
8 /nail/var/lib/docker/aufs/layers/ac6b8ff4c0e7b91230ebf4c1caf16f06c1fdceff6111fd58f4ea50fc2dd5050b
7 /nail/var/lib/docker/aufs/layers/fd895b6f56aedf09c48dba97931a34cea863a21175450c31b6ceadde03f7b3da
6 /nail/var/lib/docker/aufs/layers/d5bbef5adf2efb6f15d4f96c4bee21beb955255d1ec17baf35de66e98e6c7328
5 /nail/var/lib/docker/aufs/layers/9646360df378b88eae6f1d6288439eebd9647d5b9e8a471840d4a9d6ed5d92a4
4 /nail/var/lib/docker/aufs/layers/a8001a0e9515cbbda89a54120a89bfd9a3d0304c8d2812401aba33d22a2358ea
0 /nail/var/lib/docker/image/aufs/layerdb/sha256/d1c659b8e3d0e893e95c8eedc755adcb91a1c2022e1090376b451f7206f9b1c0/cache-id
```
So we see there's a chain of child layers, with `f3286009193` as the tip.
```
$ docker-find f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579'$'
+ sudo find /nail/var/lib/docker '(' -path '/nail/var/lib/docker/aufs/diff/*' -o -path '/nail/var/lib/docker/aufs/mnt/*' ')' -prune -o -print
+ grep --color 'f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579$'
/nail/var/lib/docker/aufs/layers/f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579
+ sudo find /nail/var/lib/docker '(' -path '/nail/var/lib/docker/aufs/diff/*' -o -path '/nail/var/lib/docker/aufs/mnt/*' ')' -prune -o -type f -print0
+ sudo xargs -0 -P20 grep --color -l 'f3286009193f95ab95a16b2561331db06803ac536cea921d9aa64e1564046579$'
/nail/var/lib/docker/image/aufs/layerdb/mounts/eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e/mount-id
```
So that layer was used in mount `eb809c0321`. I don't find any references to that mount anywhere:
```
$ docker-find eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e
+ sudo find /nail/var/lib/docker '(' -path '/nail/var/lib/docker/aufs/diff/*' -o -path '/nail/var/lib/docker/aufs/mnt/*' ')' -prune -o -print
+ grep --color eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e
/nail/var/lib/docker/image/aufs/layerdb/mounts/eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e
/nail/var/lib/docker/image/aufs/layerdb/mounts/eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e/mount-id
/nail/var/lib/docker/image/aufs/layerdb/mounts/eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e/init-id
/nail/var/lib/docker/image/aufs/layerdb/mounts/eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e/parent
+ sudo find /nail/var/lib/docker '(' -path '/nail/var/lib/docker/aufs/diff/*' -o -path '/nail/var/lib/docker/aufs/mnt/*' ')' -prune -o -type f -print0
+ sudo xargs -0 -P20 grep --color -l eb809c0321a2501e61763333bc0dfb33ea0539c15957587f5de003ad21b8275e
```
Is there any way to find what container that mount was used for?
The doc only says the mount ID is no longer equal to the container ID, which isn't very helpful.
https://docs.docker.com/engine/userguide/storagedriver/aufs-driver/
@bukzor `eb809c0321` is the container id. What docs mean is that aufs id (`f3286009193f` in your case) is not container id.
/cc @dmcgowan as well
@tonistiigi OK.
Then obviously the mount has outlived its container.
At what point in the container lifecycle is the mount cleaned up?
Is this the temporary writable aufs for running/stopped containers?
@bukzor The (rw) mount is deleted on container deletion. Unmounting happens when the container process stops. The diff folders are where the individual layer contents are stored; it doesn't matter whether the layer is mounted or not.
@bukzor The link between the aufs id and the container id can be found at `image/aufs/layerdb/mounts/<container-id>/mount-id`. If you only know an aufs id, the easiest way to find the container id is to grep the `image/aufs/layerdb` directory for it. If nothing is found, the cleanup was not completed cleanly.
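That grep approach can be wrapped in a small helper. This is only a sketch, not part of docker; `aufs_to_container` and the `DOCKER_ROOT` override are hypothetical names, and it assumes the `image/aufs/layerdb/mounts/<container-id>/mount-id` layout described above:

```shell
#!/bin/sh
# Sketch: map an aufs layer id back to the container that mounted it.
# DOCKER_ROOT defaults to /var/lib/docker; override it when testing.
aufs_to_container() {
    aufs_id="$1"
    root="${DOCKER_ROOT:-/var/lib/docker}"
    # Each mounts/<container-id>/mount-id file holds the aufs id of that
    # container's writable layer; print the container id on a match.
    for f in "$root"/image/aufs/layerdb/mounts/*/mount-id; do
        [ -f "$f" ] || continue
        if [ "$(cat "$f")" = "$aufs_id" ]; then
            basename "$(dirname "$f")"
        fi
    done
}
```

If this prints nothing for an aufs id that still has a diff directory, the cleanup did not complete cleanly, as described above.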
Running into a similar issue. We're running daily CI on the docker daemon server, and /var/lib/docker/aufs/diff takes up quite an amount of disk space, which it shouldn't.
Still `2gb` in `aufs/diff` after trying everything reasonable suggested here or in related threads (including @bukzor's bash script above). Short of a proper fix, is there any straightforward way of removing the leftover mounts without removing all other images at the same time? (If no containers are running currently, I guess there should be no mounts, right?)
I am experiencing the same issue. I use this machine to test a lot of containers, then commit/delete them. My /var/lib/docker/aufs directory is currently 7.9G. I'm going to have to move this directory to another mount point, because storage on this one is limited. :(
```
# du -sh /var/lib/docker/aufs/diff/
1.9T /var/lib/docker/aufs/diff/
```
@mcallaway Everything in `aufs/diff` is going to be fs writes performed
in a container.
I have the same issue. All the containers I have are in a running state, but there are lots of aufs diff directories which don't relate to these containers; they relate to old, removed containers. I can remove them manually, but that is not an option. There should be a reason for such behavior.
I use k8s 1.3.5 and docker 1.12.
Running `docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc spotify/docker-gc` helped.
I have the same issue. I'm using Gitlab CI with dind (docker in docker). IMHO, when an image in the registry is updated under the same tag and pulled, and the related container is then restarted, the old container and image are not GCed unless you run `spotify/docker-gc`. Can someone else confirm this?
@kayrus correct, docker will not automatically assume that an "untagged" image should also be _removed_. Containers could still be using that image, and you can still start new containers from that image (referencing it by its ID). You can remove "dangling" images using `docker rmi $(docker images -qa -f dangling=true)`. Also, docker 1.13 will get data management commands (see https://github.com/docker/docker/pull/26108), which allow you to more easily clean up unused images, containers, etc.
@thaJeztah does `/var/lib/docker/aufs/diff/` actually contain the "untagged" images?
@kayrus yes, they are part of the images (tagged and untagged)
Getting a similar issue: no containers/images/volumes, but ~13GB of diffs.
```
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 1030
Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 3.13.0-32-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.861 GiB
Name: gitrunner
ID: GSAW:6X5Z:SHHU:NZIM:O76D:P5OE:7OZG:UFGQ:BOAJ:HJFM:5G6W:5APP
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
```
```
$ docker volume ls
DRIVER VOLUME NAME
$ docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$
```
```
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
/dev/mapper/gitrunner--docker-lib--docker 18G 15G 2.6G 85% /var/lib/docker
```
```
/var/lib/docker# sudo du -sm aufs/*
13782 aufs/diff
5 aufs/layers
5 aufs/mnt
```
``` shell
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 1.12.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: xfs
Dirs: 1122
```
Same issue here. I understand 1.13 may get data management commands but in the meantime, I just want to safely delete the contents of this directory without killing Docker.
This is relatively blocking at this point.
Same here. Still no official solution?
I've brought this up a few different times in (Docker Community) Slack. Each time a handful of people run through a list of garbage collection scripts/cmds I should run as a solution. While those have helped (read: not solved - space is still creeping towards full) in the interim, I think we can all agree that's not the ideal long-term fix.
@jadametz 1.13 has `docker system prune`. Beyond that, I'm not sure how else Docker can help (open to suggestions). The images aren't just getting onto the system on their own, but rather through pulls, builds, etc. In terms of actual orphaned layers (no images on the system referencing them), we'll need to address that separately.
I have exactly the same issue!
```
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 2501
Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 3.13.0-96-generic
Operating System: Ubuntu 14.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 14.69 GiB
Name: ip-172-31-45-4
ID: R5WV:BXU5:AV6T:GZUK:SAEA:6E74:PRSO:NQOH:EPMQ:W6UT:5DU4:LE64
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
```
No images, containers or volumes. 42Gb in aufs/diff
Anything to help clear this directory safely would be very useful! Tried everything in this thread without any success. Thanks.
@adamdry only third-party script: https://github.com/docker/docker/issues/22207#issuecomment-252560212
Thanks
@kayrus I did indeed try that; it increased my total disk usage slightly and didn't appear to do anything to the aufs/diff directory. I also tried `docker system prune`, which didn't run. And I tried `docker rmi $(docker images -qa -f dangling=true)`, which didn't find any images to remove.
For anyone interested, I'm now using this to clean down all containers, images, volumes and old aufs:
`### FYI I am a Docker noob so I don't know if this causes any underlying issues but it does work for me - use at your own risk ###`
Lots of inspiration taken from here: http://stackoverflow.com/questions/30984569/error-error-creating-aufs-mount-to-when-building-dockerfile
```
docker rm -f $(docker ps -a -q) || docker rmi -f $(docker images -q) || docker rmi -f $(docker images -a -q)
service docker stop
rm -rf /var/lib/docker/aufs
rm -rf /var/lib/docker/image/aufs
rm -f /var/lib/docker/linkgraph.db
service docker start
```
@adamdry Best to not use `-f` when doing rm/rmi as it will hide errors in removal.
I do consider the current situation... where `-f` hides an error and then we are left with some left-over state that is completely invisible to the user... as a bug.
I'm also seeing this on a completely new and unsurprising installation:
```
root@builder:/var/lib/docker# docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.12.4
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 63
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: overlay host null bridge
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options:
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 3.625 GiB
Name: builder
ID: 2WXZ:BT74:G2FH:W7XD:VVXM:74YS:EA3A:ZQUK:LPID:WYKF:HDWC:UKMJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
Insecure Registries:
127.0.0.0/8
root@builder:/var/lib/docker# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@builder:/var/lib/docker# docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
root@builder:/var/lib/docker# du -hd2
4.0K ./swarm
6.0M ./image/aufs
6.0M ./image
4.0K ./trust
28K ./volumes
4.0K ./containers
276K ./aufs/layers
292K ./aufs/mnt
1.5G ./aufs/diff <-------------------------
1.5G ./aufs
4.0K ./tmp
72K ./network/files
76K ./network
1.5G .
root@builder:/var/lib/docker#
```
@robhaswell Seeing as it's a new install, do you want to try this?
https://github.com/docker/docker/issues/22207#issuecomment-266784433
@adamdry I've already deleted `/var/lib/docker/aufs` as it was blocking
my work. What do you expect your instructions to achieve? If they stop
the problem from happening again in the future, then I can try to recreate the issue and try your instructions. However if the purpose is just to free up the space then I've already achieved that.
@robhaswell Yeah, it was to free up disk space, but I had follow-up issues when trying to rebuild my images; following all the steps in that script resolved those issues.
During build, if the build process is interrupted during the layer build step (which also contains a blob to be copied), followed by stopping the container, it leaves behind data in /var/lib/docker/aufs/diff/. It showed up as a dangling image, and cleaning that up didn't release the space either. Is it possible to include this as part of docker system prune? Only deleting the blob data inside this folder frees up the space, and I am not sure whether that will cause any issue or not.
Docker version: 1.13.0-rc1
> During build, if the build process is interrupted during layer build process (which also contains a blob to be copied), followed by stopping the container, it leaves behind data
This could also be the cause of my problems - I interrupt a lot of builds.
During docker pull, I observed the following two cases:
1. If the process is interrupted while it says downloading (which downloads the image layer into /var/lib/docker/tmp/), it cleans up all the data in that folder.
2. If the process is interrupted while it says extracting (which I suppose extracts the layer from tmp into /var/lib/docker/aufs/diff/), it cleans up both the tmp and diff blob data.
During the image build process:
1. On interrupting the process during "Sending build context to docker daemon" (which copies blob data, in my case into /var/lib/docker/tmp/), the data remains there forever and cannot be cleaned by any command except manually deleting it. I am not sure how the apt-get updates in an image are handled.
2. While a layer containing blob data, say a large software setup, is being built, if the process is interrupted, the docker container keeps working on the image. In my case only 1 layer of blob data, which is already available in the tmp folder, makes up the whole image. But if the container is stopped using the docker stop command, two cases happen:
   a. If the mount process is still happening, it will leave behind data in the tmp and diff folders.
   b. If the data was already copied to the diff folder, it will remove the data from the tmp folder and leave data in the diff folder and maybe the mount folder.
We have an automated build process, which needs a way to stop any build process gracefully. Recently, a process got killed by the kernel due to an out-of-memory error on a machine with a low configuration.
If one image is to be built of 2 layers, the 1st layer is built and the 2nd is interrupted, docker system prune seems to clean up the data for the container of the layer that was interrupted and stopped. But it doesn't clean up the data of the previous layers in case of an interrupt. Also, it didn't reflect the total disk space reclaimed. Ran these tests on AWS, Ubuntu 14.04, x86_64 with the aufs filesystem; ran the docker prune test with docker 1.13.0-rc3 and docker 1.12.
@thaJeztah
Please let me know if i am misinterpreting anything.
I opened an issue for the `/var/lib/docker/tmp` files not being cleaned up; https://github.com/docker/docker/issues/29486
> Docker system prune seems to clean up the data for the container of layer which was interrupted and container stopped. But it doesn't clean up the data of the previous layers in case of interrupt.
I tried to reproduce that situation, but wasn't able to see it with a simple case.
Start with a clean install and an empty `/var/lib/docker`, create a big file for testing, and a Dockerfile;
```bash
mkdir repro && cd repro
fallocate -l 300M bigfile
```
```bash
cat > Dockerfile <<EOF
FROM scratch
COPY ./bigfile /
COPY ./bigfile /again/
COPY ./bigfile /and-again/
EOF
```
start `docker build`, and cancel while building, but _after_ the build
context has been sent;
```bash
docker build -t stopme .
Sending build context to Docker daemon 314.6 MB
Step 1/4 : FROM scratch
--->
Step 2/4 : COPY ./bigfile /
---> 28eb6d7b0920
Removing intermediate container 98876b1673bf
Step 3/4 : COPY ./bigfile /again/
^C
```
check content of `/var/lib/docker/aufs/`
```bash
du -h /var/lib/docker/aufs/
301M /var/lib/docker/aufs/diff/9127644c356579741348f7f11f50c50c9a40e0120682782dab55614189e82917
301M /var/lib/docker/aufs/diff/81fd6b2c0cf9a28026cf8982331016a6cd62b7df5a3cf99182e7e09fe0d2f084/again
301M /var/lib/docker/aufs/diff/81fd6b2c0cf9a28026cf8982331016a6cd62b7df5a3cf99182e7e09fe0d2f084
601M /var/lib/docker/aufs/diff
8.0K /var/lib/docker/aufs/layers
4.0K /var/lib/docker/aufs/mnt/9127644c356579741348f7f11f50c50c9a40e0120682782dab55614189e82917
4.0K /var/lib/docker/aufs/mnt/81fd6b2c0cf9a28026cf8982331016a6cd62b7df5a3cf99182e7e09fe0d2f084
4.0K /var/lib/docker/aufs/mnt/b6ffb1d5ece015ed4d3cf847cdc50121c70dc1311e42a8f76ae8e35fa5250ad3-init
16K /var/lib/docker/aufs/mnt
601M /var/lib/docker/aufs/
```
run the `docker system prune` command to clean up images, containers;
```
docker system prune -a
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all images without at least one container associated to them
Are you sure you want to continue? [y/N] y
Deleted Images:
deleted: sha256:253b2968c0b9daaa81a58f2a04e4bc37f1dbf958e565a42094b92e3a02c7b115
deleted: sha256:cad1de5fd349865ae10bfaa820bea3a9a9f000482571a987c8b2b69d7aa1c997
deleted: sha256:28eb6d7b09201d58c8a0e2b861712701cf522f4844cf80e61b4aa4478118c5ab
deleted: sha256:3cda5a28d6953622d6a363bfaa3b6dbda57b789e745c90e039d9fc8a729740db
Total reclaimed space: 629.1 MB
```
check content of `/var/lib/docker/aufs/`
```bash
du -h /var/lib/docker/aufs/
4.0K /var/lib/docker/aufs/diff
4.0K /var/lib/docker/aufs/layers
4.0K /var/lib/docker/aufs/mnt/b6ffb1d5ece015ed4d3cf847cdc50121c70dc1311e42a8f76ae8e35fa5250ad3-init
8.0K /var/lib/docker/aufs/mnt
20K /var/lib/docker/aufs/
```
I do see that the `-init` mount is left behind; I'll check if we can solve that (although it's just an empty directory)
The only difference in the Dockerfile I had used was (to create different layers):
```bash
cat > Dockerfile <<EOF
FROM scratch
COPY ["./bigfile", "randomNoFile1", "/"]
COPY ["./bigfile", "randomNoFile2", "/"]
EOF
```
I am not sure if it makes a difference.
No, the problem isn't about the empty init folders. In my case, it was the blob. However, I can recheck it on Monday and update.
Also, I was using a 5GB file, created by reading bytes from /dev/urandom.
In your case, the same file is added 2 times. Would that create a single layer and mount the 2nd layer from it, or would it be 2 separate layers? In my case, it's always 2 separate layers.
@thaJeztah
Thank you for such a quick response on the issue. Addition of this feature would be of great help!
@monikakatiyar16 I tried to reproduce this as well by canceling the build multiple times during both `ADD` and `RUN` commands, but couldn't get anything to leak to `aufs/diff` after deletion. I couldn't quite understand which container you are stopping, because containers should not be running during `ADD/COPY` operations. If you can put together a reproducer that we could run, that would be greatly appreciated.
It's possible that I could be doing something wrong. Since I am travelling over the weekend, I will reproduce it and update all the required info here on Monday.
@tonistiigi @thaJeztah
I feel you are right. There are actually no containers that are listed as active and running. Instead there are dead containers.
Docker system prune didn't work in my case, probably because the process didn't get killed by Ctrl+C; instead, it kept running in the background. In my case, that would be the reason it couldn't remove those blobs.
When I interrupt the process using Ctrl+C, the build process gets killed, but a docker-untar process remains alive in the background and keeps working on building the image. (Note: /var/lib/docker is soft-linked to /home/lib/docker to use EBS volumes for large data on AWS)
`root 12700 10781 7 11:43 ? 00:00:04 docker-untar /home/lib/docker/aufs/mnt/d446d4f8a7dbae162e7578af0d33ac38a63b4892905aa86a8d131c1e75e2828c`
I have attached the script I had been using for creating large files and building the image (gc_maxpush_pull.sh)
Also attached the behaviour of build process for building an image-interrupting it with Ctrl+C (DockerBuild_WOProcessKill) and building image -interrupting it with Ctrl+C - killing the process (DockerBuild_WithProcessKill)
Using the commands -
To create large file : `./gc_maxpush_pull.sh 1 5gblayer 0 512 1`
To build images : `./gc_maxpush_pull.sh 1 5gblayer 1 512 1`
[DockerBuild.zip](https://github.com/docker/docker/files/660942/DockerBuild.zip)
Steps to replicate:
1. Create a large file of 5GB
2. Start the build process, and interrupt it only after sending the build context is over and it's actually copying the blob.
3. It completes building the image after a while and shows it in docker images (as in case 1 attached by me - DockerBuild_WOProcessKill)
4. If the process is killed, it takes a while and leaves the blob data in /diff (which it should, on killing the process abruptly, as attached in the file - DockerBuild_WithProcessKill)
If what I am assuming is correct, then this might not be an issue with docker prune, but rather with the killing of docker build, which somehow isn't working for me.
Is there a graceful way of interrupting or stopping the build image process that also takes care of cleaning up the copied data (as is handled in docker pull)?
Previously, I was not killing the process. I am also curious what docker-untar does, and why it mounts into both the /mnt and /diff folders and later cleans out the /mnt folder.
Tested this with Docker version 1.12.5, build 7392c3b, on AWS.
```
docker info
Containers: 2
Running: 0
Paused: 0
Stopped: 2
Images: 0
Server Version: 1.12.5
Storage Driver: aufs
Root Dir: /home/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 4
Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: overlay bridge null host
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 3.13.0-105-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.859 GiB
Name: master
ID: 2NQU:D2C5:5WPL:IIDR:P6FO:OAG7:GHW6:ZJMQ:VDHI:B5CI:XFZJ:ZSZM
Docker Root Dir: /home/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
127.0.0.0/8
```
@monikakatiyar16 When I manually kill the `untar` process during a build I get `Error processing tar file(signal: killed):` in the build output. Leaving the container behind in `docker ps -a` is the correct behavior; the same thing happens on any build error and lets you debug the problems that caused the build to fail. I have no problem with deleting this container though, and if I do that, all data in `/var/lib/docker/aufs` is cleaned up as well.
@tonistiigi Yes, you are correct. I was able to delete the volume associated with the container, and it cleaned up everything after I killed the docker-untar process. Docker system prune also works in this case.
The actual issue that left volumes behind occurred when, without killing the docker-untar process, I tried removing the docker container along with its volumes, which gave the following error:
`docker rm -v -f $(docker ps -a -q)`
`Error response from daemon: Driver aufs failed to remove root filesystem 97931bf059a0ec219efd3f762dbb173cf9372761ff95746358c08e2b61f7ce79: rename /home/lib/docker/aufs/diff/359d27c5b608c9dda1170d1e34e5d6c5d90aa2e94826257f210b1442317fad70 /home/lib/docker/aufs/diff/359d27c5b608c9dda1170d1e34e5d6c5d90aa2e94826257f210b1442317fad70-removing: device or resource busy`
Daemon logs:
`Error removing mounted layer 78fb899aab981557bc2ee48e9738ff4c2fcf2d10a1984a62a77eefe980c68d4a: rename /home/lib/docker/aufs/diff/d2605125ef072de79dc948f678aa94dd6dde562f51a4c0bd08a210d5b2eba5ec /home/lib/docker/aufs/diff/d2605125ef072de79dc948f678aa94dd6dde562f51a4c0bd08a210d5b2eba5ec-removing: device or resource busy
ERRO[0956] Handler for DELETE /v1.25/containers/78fb899aab98 returned error: Driver aufs failed to remove root filesystem 78fb899aab981557bc2ee48e9738ff4c2fcf2d10a1984a62a77eefe980c68d4a: rename /home/lib/docker/aufs/diff/d2605125ef072de79dc948f678aa94dd6dde562f51a4c0bd08a210d5b2eba5ec /home/lib/docker/aufs/diff/d2605125ef072de79dc948f678aa94dd6dde562f51a4c0bd08a210d5b2eba5ec-removing: device or resource busy
ERRO[1028] Error unmounting container 78fb899aab981557bc2ee48e9738ff4c2fcf2d10a1984a62a77eefe980c68d4a: no such file or directory`
It seems that the order to be followed right now to interrupt a docker build is:
`Interrupt docker build | Kill docker-untar process | remove container and volume: docker rm -v -f $(docker ps -a -q)`
For `docker v1.13.0-rc4`, it can be: `Interrupt docker build | Kill docker-untar process | docker system prune -a`
This seems to work perfectly. There are no cleanup issues; the only issue is the docker-untar process not being killed along with the docker-build process.
I will search for or log a new issue for graceful interruption of docker build that also stops the docker-untar process along with it.
(Verified this with docker v1.12.5 and v1.13.0-rc4)
Update: on killing docker-untar while "Sending build context to docker daemon" is in progress, the build reports `Error response from daemon: Error processing tar file(signal: terminated)`, but during layer copy it doesn't (for me).
Thanks for being so patient and for giving your time!
I'm seeing `/var/lib/docker/aufs` consistently increase in size on a docker swarm mode worker. This thing is mostly autonomous, being managed by the swarm manager, with very little manual container creation aside from some maintenance commands here and there.
I do run `docker exec` on service containers; not sure if that may be a cause.
My workaround in this case was to start up another worker, set the full node to `--availability=drain`, and manually move over a couple of volume mounts.
```
ubuntu@ip-172-31-18-156:~$ docker --version
Docker version 1.12.3, build 6b644ec
```
This has hit our CI server for ages. This needs to be fixed.
@orf thanks
Same issue here. Neither removing containers, volumes, and images, nor the Docker 1.13 cleanup commands, have any effect.
I also confirm I cancelled some image builds; maybe that leaves folders that can't be reached either.
I'll use the good old rm method for now, but this is clearly a bug.
Files in /var/lib/docker/aufs/diff fill up 100% of the space on the 30G /dev/sda1 filesystem:
```
root@Ubuntu:/var/lib/docker/aufs/diff# df -h
Filesystem Size Used Avail Use% Mounted on
udev 14G 0 14G 0% /dev
tmpfs 2.8G 273M 2.5G 10% /run
/dev/sda1 29G 29G 0 100% /
tmpfs 14G 0 14G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 14G 0 14G 0% /sys/fs/cgroup
/dev/sdb1 197G 60M 187G 1% /mnt
tmpfs 2.8G 0 2.8G 0% /run/user/1000
```
`du -h -d 1 /var/lib/docker/aufs/diff | grep '[0-9]G\|'` shows:
```
4.1G /var/lib/docker/aufs/diff/a0cde42cbea362bbb2a73ffbf30059bcce7ef0256d1d7e186264f915d15
14G /var/lib/docker/aufs/diff/59aee33d8a607b5315ce103cd99f17b4dfdec73c9a2f3bb2afc7d02bfae
20G /var/lib/docker/aufs/diff
```
Also tried **docker system prune**; that did not help.
Has anyone found a solution for this ongoing issue of super large files in diff before this bug is fixed in the code?
Yes, the method has already been given, but here is an apocalyptic snippet that just destroys everything I put in place here at work (except local folders for the volumes). Put it in your bashrc or another bash config file.
```
alias docker-full-cleanup='func_full-cleanup-docker'
func_full-cleanup-docker() {
echo "WARN: This will remove everything from docker: volumes, containers and images. Will you dare? [y/N] "
read choice
if [ \( "$choice" == "y" \) -o \( "$choice" == "Y" \) ]
then
sudo echo "| sudo rights check [OK]"
sizea=`sudo du -sh /var/lib/docker/aufs`
echo "Stopping all running containers"
containers=`docker ps -a -q`
if [ -n "$containers" ]
then
docker stop $containers
fi
echo "Removing all docker images and containers"
docker system prune -f
echo "Stopping Docker daemon"
sudo service docker stop
echo "Removing all leftovers in /var/lib/docker (bug #22207)"
sudo rm -rf /var/lib/docker/aufs
sudo rm -rf /var/lib/docker/image/aufs
sudo rm -f /var/lib/docker/linkgraph.db
echo "Starting Docker daemon"
sudo service docker start
sizeb=`sudo du -sh /var/lib/docker/aufs`
echo "Size before full cleanup:"
echo " $sizea"
echo "Size after full cleanup:"
echo " $sizeb"
fi
}
```
I ran the rm -rf command to remove the files from the diff folder for now. I'll probably have to look into the script if the diff folder fills up the entire disk space again.
Hoping to see this issue fixed in the code, instead of workarounds.
Hi, I have the same issue on docker 1.10.2, running kubernetes. This is my docker info:
```
Containers: 7
Running: 0
Paused: 0
Stopped: 7
Images: 4
Server Version: 1.10.2
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 50
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 4.4.0-31-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.954 GiB
Name: ubuntu-k8s-03
ID: NT23:5Y7J:N2UM:NA2W:2FHE:FNAS:56HF:WFFF:N2FR:O4T4:WAHC:I3PO
Debug mode (server): true
File Descriptors: 10
Goroutines: 23
System Time: 2017-02-14T15:25:00.740998058+09:00
EventsListeners: 0
Init SHA1: 3e247d0d32543488f6e70fbb7c806203f3841d1b
Init Path: /usr/lib/docker/dockerinit
Docker Root Dir: /var/lib/docker
WARNING: No swap limit support
```
I'm trying to track all unused diff directory under `/var/lib/docker/aufs/diff` and `/var/lib/docker/aufs/mnt/` by analyzing layer files under `/var/lib/docker/image/aufs/imagedb`, here is the script I used:
https://gist.github.com/justlaputa/a50908d4c935f39c39811aa5fa9fba33
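The core idea of that script can also be sketched as a short shell function. This is a hypothetical illustration, not part of docker or the gist; `find_orphan_diffs` and the `DOCKER_ROOT` override are made-up names, and it assumes the directory layout discussed in this thread:

```shell
#!/bin/sh
# Sketch: list aufs diff directories that nothing under image/aufs/layerdb
# references. DOCKER_ROOT defaults to /var/lib/docker; override for testing.
find_orphan_diffs() {
    root="${DOCKER_ROOT:-/var/lib/docker}"
    for d in "$root"/aufs/diff/*/; do
        [ -d "$d" ] || continue
        id="$(basename "$d")"
        id="${id%-init}"   # -init layers share their parent mount's id
        # If the id appears nowhere in the layerdb, nothing owns this dir.
        if ! grep -rqs "$id" "$root/image/aufs/layerdb"; then
            echo "$d"
        fi
    done
}
```

Use it only to inspect; whether it is safe to delete what it prints is exactly what this issue is about, so stop the daemon and take a backup before removing anything.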
But I ran into a problem when I stopped and restarted the docker daemon; it seems I put docker into an inconsistent state:
/var/log/upstart/docker.log:
```
DEBU[0277] Cleaning up old shm/mqueue mounts: start.
DEBU[0277] Cleaning up old shm/mqueue mounts: done.
DEBU[0277] Clean shutdown succeeded
Waiting for /var/run/docker.sock
DEBU[0000] docker group found. gid: 999
DEBU[0000] Server created for HTTP on unix (/var/run/docker.sock)
DEBU[0000] Using default logging driver json-file
INFO[0000] [graphdriver] using prior storage driver "aufs"
DEBU[0000] Using graph driver aufs
INFO[0000] Graph migration to content-addressability took 0.00 seconds
DEBU[0000] Option DefaultDriver: bridge
DEBU[0000] Option DefaultNetwork: bridge
INFO[0000] Firewalld running: false
DEBU[0000] /sbin/iptables, [--wait -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL ! --dst 127.0.0.0/8 -j DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t nat -D OUTPUT -m addrtype --dst-type LOCAL -j DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t nat -D PREROUTING]
DEBU[0000] /sbin/iptables, [--wait -t nat -D OUTPUT]
DEBU[0000] /sbin/iptables, [--wait -t nat -F DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t nat -X DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t filter -F DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t filter -X DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t filter -F DOCKER-ISOLATION]
DEBU[0000] /sbin/iptables, [--wait -t filter -X DOCKER-ISOLATION]
DEBU[0000] /sbin/iptables, [--wait -t nat -n -L DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t nat -N DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t filter -n -L DOCKER]
DEBU[0000] /sbin/iptables, [--wait -t filter -n -L DOCKER-ISOLATION]
DEBU[0000] /sbin/iptables, [--wait -t filter -C DOCKER-ISOLATION -j RETURN]
DEBU[0000] /sbin/iptables, [--wait -I DOCKER-ISOLATION -j RETURN]
/var/run/docker.sock is up
DEBU[0000] Registering ipam driver: "default"
DEBU[0000] releasing IPv4 pools from network bridge (dcfcc71060f02440ae53da5ee0f083ca51c33a290565f1741f451754ae6b4257)
DEBU[0000] ReleaseAddress(LocalDefault/10.254.69.0/24, 10.254.69.1)
DEBU[0000] ReleasePool(LocalDefault/10.254.69.0/24)
DEBU[0000] Allocating IPv4 pools for network bridge (159d0a404ff6564b4fcfe633f0c8c123c0c0606d28ec3b110272650c5fc1bcb6)
DEBU[0000] RequestPool(LocalDefault, 10.254.69.1/24, , map[], false)
DEBU[0000] RequestAddress(LocalDefault/10.254.69.0/24, 10.254.69.1, map[RequestAddressType:com.docker.network.gateway])
DEBU[0000] /sbin/iptables, [--wait -t nat -C POSTROUTING -s 10.254.69.0/24 ! -o docker0 -j MASQUERADE]
DEBU[0000] /sbin/iptables, [--wait -t nat -C DOCKER -i docker0 -j RETURN]
DEBU[0000] /sbin/iptables, [--wait -t nat -I DOCKER -i docker0 -j RETURN]
DEBU[0000] /sbin/iptables, [--wait -D FORWARD -i docker0 -o docker0 -j DROP]
DEBU[0000] /sbin/iptables, [--wait -t filter -C FORWARD -i docker0 -o docker0 -j ACCEPT]
DEBU[0000] /sbin/iptables, [--wait -t filter -C FORWARD -i docker0 ! -o docker0 -j ACCEPT]
DEBU[0000] /sbin/iptables, [--wait -t filter -C FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT]
DEBU[0001] /sbin/iptables, [--wait -t nat -C PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]
DEBU[0001] /sbin/iptables, [--wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER]
DEBU[0001] /sbin/iptables, [--wait -t nat -C OUTPUT -m addrtype --dst-type LOCAL -j DOCKER ! --dst 127.0.0.0/8]
DEBU[0001] /sbin/iptables, [--wait -t nat -A OUTPUT -m addrtype --dst-type LOCAL -j DOCKER ! --dst 127.0.0.0/8]
DEBU[0001] /sbin/iptables, [--wait -t filter -C FORWARD -o docker0 -j DOCKER]
DEBU[0001] /sbin/iptables, [--wait -t filter -C FORWARD -o docker0 -j DOCKER]
DEBU[0001] /sbin/iptables, [--wait -t filter -C FORWARD -j DOCKER-ISOLATION]
DEBU[0001] /sbin/iptables, [--wait -D FORWARD -j DOCKER-ISOLATION]
DEBU[0001] /sbin/iptables, [--wait -I FORWARD -j DOCKER-ISOLATION]
WARN[0001] Your kernel does not support swap memory limit.
DEBU[0001] Cleaning up old shm/mqueue mounts: start.
DEBU[0001] Cleaning up old shm/mqueue mounts: done.
DEBU[0001] Loaded container 0790b33ec8e5345ac944d560263b8e13cb75f80dd82cd25753c7320bbcb2747c
DEBU[0001] Loaded container 0e36a6c9319e6b7ca4e5b5408e99d77d51b1f4e825248c039ba0260e628c483d
DEBU[0001] Loaded container 135fb2e8cad26d531435dcd19d454e41cf7aece289ddc7374b4c2a984f8b094a
DEBU[0001] Loaded container 2c28de46788ce96026ac8e61e99c145ec55517543e078a781e8ce6c8cddec973
DEBU[0001] Loaded container 35eb075b5815e621378eb8a7ff5ad8652819ec851eaa4f7baedb1383dfa51a57
DEBU[0001] Loaded container 6be37a301a8f52040adf811041c140408224b12599aa55155f8243066d2b0b69
DEBU[0001] Loaded container d98ac7f052fef31761b82ab6c717760428ad5734df4de038d80124ad5b5e8614
DEBU[0001] Starting container 2c28de46788ce96026ac8e61e99c145ec55517543e078a781e8ce6c8cddec973
ERRO[0001] Couldn't run auplink before unmount: exit status 22
ERRO[0001] error locating sandbox id d4c538661db2edc23c79d7dddcf5c7a8886c9477737888a5fc2641bc5e66da8b: sandbox d4c538661db2edc23c79d7dddcf5c7a8886c9477737888a5fc2641bc5e66da8b not found
WARN[0001] failed to cleanup ipc mounts: failed to umount /var/lib/docker/containers/2c28de46788ce96026ac8e61e99c145ec55517543e078a781e8ce6c8cddec973/shm: invalid argument
ERRO[0001] Failed to start container 2c28de46788ce96026ac8e61e99c145ec55517543e078a781e8ce6c8cddec973: error creating aufs mount to /var/lib/docker/aufs/mnt/187b8026621da2add42330c9393a474fcd9af2e4567596d61bcd7a40c85f71da: invalid argument
INFO[0001] Daemon has completed initialization
INFO[0001] Docker daemon commit=c3959b1 execdriver=native-0.2 graphdriver=aufs version=1.10.2
DEBU[0001] Registering routers
DEBU[0001] Registering HEAD, /containers/{name:.*}/archive
```
and when I try to create new containers with `docker run`, it fails with the message:
```
docker: Error response from daemon: error creating aufs mount to /var/lib/docker/aufs/mnt/f9609c0229baa2cdc6bc07c36970ef4f192431c1b1976766b3ea23d72c355df3-init: invalid argument.
See 'docker run --help'.
```
and the daemon log shows:
```
DEBU[0173] Calling POST /v1.22/containers/create
DEBU[0173] POST /v1.22/containers/create
DEBU[0173] form data: {"AttachStderr":false,"AttachStdin":false,"AttachStdout":false,"Cmd":["/hyperkube","kubelet","--api-servers=http://localhost:8080","--v=2","--address=0.0.0.0","--enable-server","--hostname-override=172.16.210.87","--config=/etc/kubernetes/manifests-multi","--cluster-dns=10.253.0.10","--cluster-domain=cluster.local","--allow_privileged=true"],"Domainname":"","Entrypoint":null,"Env":[],"HostConfig":{"Binds":["/sys:/sys:ro","/dev:/dev","/var/lib/docker/:/var/lib/docker:rw","/var/lib/kubelet/:/var/lib/kubelet:rw","/var/run:/var/run:rw","/etc/kubernetes/manifests-multi:/etc/kubernetes/manifests-multi:ro","/:/rootfs:ro"],"BlkioDeviceReadBps":null,"BlkioDeviceReadIOps":null,"BlkioDeviceWriteBps":null,"BlkioDeviceWriteIOps":null,"BlkioWeight":0,"BlkioWeightDevice":null,"CapAdd":null,"CapDrop":null,"CgroupParent":"","ConsoleSize":[0,0],"ContainerIDFile":"","CpuPeriod":0,"CpuQuota":0,"CpuShares":0,"CpusetCpus":"","CpusetMems":"","Devices":[],"Dns":[],"DnsOptions":[],"DnsSearch":[],"ExtraHosts":null,"GroupAdd":null,"IpcMode":"","Isolation":"","KernelMemory":0,"Links":null,"LogConfig":{"Config":{},"Type":""},"Memory":0,"MemoryReservation":0,"MemorySwap":0,"MemorySwappiness":-1,"NetworkMode":"host","OomKillDisable":false,"OomScoreAdj":0,"PidMode":"host","PidsLimit":0,"PortBindings":{},"Privileged":true,"PublishAllPorts":false,"ReadonlyRootfs":false,"RestartPolicy":{"MaximumRetryCount":0,"Name":"always"},"SecurityOpt":null,"ShmSize":0,"UTSMode":"","Ulimits":null,"VolumeDriver":"","VolumesFrom":null},"Hostname":"","Image":"gcr.io/google_containers/hyperkube:v1.1.8","Labels":{},"NetworkingConfig":{"EndpointsConfig":{}},"OnBuild":null,"OpenStdin":false,"StdinOnce":false,"StopSignal":"SIGTERM","Tty":false,"User":"","Volumes":{},"WorkingDir":""}
ERRO[0173] Couldn't run auplink before unmount: exit status 22
ERRO[0173] Clean up Error! Cannot destroy container 482957f3e4e92a0ba56d4787449daa5a8708f3b77efe0c603605f35d02057566: nosuchcontainer: No such container: 482957f3e4e92a0ba56d4787449daa5a8708f3b77efe0c603605f35d02057566
ERRO[0173] Handler for POST /v1.22/containers/create returned error: error creating aufs mount to /var/lib/docker/aufs/mnt/f9609c0229baa2cdc6bc07c36970ef4f192431c1b1976766b3ea23d72c355df3-init: invalid argument
```
Does anyone know whether my approach is correct, and why the problem happens after I delete those folders?
I opened #31012 to at least make sure we don't leak these dirs in any
circumstances.
We of course also need to look at the various causes of the `busy` errors.
This had been biting me for as long as I can remember. I got pretty much the same results as described above when I switched to the `overlay2` driver some days ago and nuked the aufs folder completely (`docker system df` said 1.5 GB, `df` said 15 GB).
I had about 1 TB of diffs using storage. After restarting my docker daemon I recovered about 700 GB. So I guess stopping the daemon prunes these?
Restarting does nothing for me, unfortunately.
A service restart did not help. This is a serious issue. Removing all containers/images does not remove those diffs.
Stopping the daemon would not prune these.
If you remove all containers and you still have `diff` dirs, then likely
you have some leaked rw layers.
We just encountered this issue. `/var/lib/docker/aufs/diff` took up 28G
and took our root filesystem to 100%, which caused our GitLab server to
stop responding. We're using docker for GitLab CI. To fix this, I used
some of the commands @sogetimaitral suggested above to delete the temp
files, and we're back up and running. I restarted the server and sent in
a new commit to trigger CI, and everything appears to be working just
as it should.
I'm definitely concerned this is going to happen again. What's the deal
here? Is this a docker bug that needs to be fixed?
1. Yes, there is a bug (both that there are issues on removal and that `--force` on `rm` ignores those issues).
2. Generally one should not be writing lots of data to the container fs, and should instead use a volume (even a throw-away volume). A large diff dir indicates that a significant amount of data is being written to the container fs.

If you don't use `--force` on remove, you would not run into this issue (or at least you'd see that you have a bunch of "dead" containers and know how/what to clean up).
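As a minimal sketch of that workflow (the container name `my-container` is hypothetical; assumes a standard Docker CLI on the host):

```shell
#!/bin/sh
# Remove without --force: if the container's layers cannot be
# deleted, the removal fails visibly instead of silently leaking
# a diff directory.
docker rm my-container

# Containers whose removal failed are left in the "dead" state,
# where they can be found and investigated later.
docker ps -a --filter status=dead
```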
I'm not manually using docker at all. We're using [gitlab-ci-multi-runner](https://gitlab.com/gitlab-org/gitlab-ci-multi-runner).
Could it be a bug on GitLab's end then?
It looks like (by default) it force-removes containers;
https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/blob/dbdbce2848530df299836768c8ea01e209a2fe40/executors/docker/executor_docker.go#L878.
Doing so can result in failures to remove the container being ignored,
and leading to the orphaned diffs.
Ok, then that tells me that this is a gitlab-ci-multi-runner bug. Is
that a correct interpretation? I'm happy to create an issue for them to
fix this.
It's a combination I guess; "force" remove makes it easier to handle
cleanups (i.e., cases where a container isn't stopped yet, etc), at the
same time (that's the "bug" @cpuguy83 mentioned), it can also hide
actual issues, such as docker failing to remove the containers
filesystem (which can have various reasons). With "force", the container
is removed in such cases. Without, the container is left around (but
marked "dead")
If the gitlab runner can function correctly without the force remove,
that'll probably be good to change (or make it configurable)
I am using [Drone](https://github.com/drone/drone) and have the same issue. I didn't check in the code how containers are removed, but I guess it force-removes as well.
Could it be a Docker-in-Docker issue? I am starting Drone with docker-compose.
I decided to go ahead and submit a gitlab-ci-multi-runner issue just to loop the devs in: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/2304
Honestly we worked around this by running Spotify's docker gc with drone CI.
@sedouard Thanks for
this tip! Running [docker-gc](https://github.com/spotify/docker-gc)
hourly from spotify solved the problem for me.
We are getting this issue running from Gitlab CI (not running in
docker), using commands to build images / run containers, (not Gitlab CI
Docker integration). We are not running any form of force removal,
simply `docker run --rm ...` and `docker rmi image:tag`
**EDIT**: sorry, actually the original problem is the same. The
difference is that running `spotify/docker-gc` does _not_ fix the
problem.
----
As you can see below, I have 0 images, 0 containers, nothing! `docker system info` agrees with me, but mentions `Dirs: 38` for the aufs storage.
That's suspicious! If you look at `/var/lib/docker/aufs/diff/`, you see that there's actually 1.7 GB of data there, across 41 directories.
And that's my personal box; on the production server it's 19 GB.
How do we clean this up? Using `spotify/docker-gc` does not remove these.
``` shell
philippe@pv-desktop:~$ docker images -a
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
philippe@pv-desktop:~$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
philippe@pv-desktop:~$ docker system info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 17.03.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 38
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-72-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 31.34 GiB
Name: pv-desktop
ID: 2U5D:CRHS:RUQK:YSJX:ZTRS:HYMV:HO6Q:FDKE:R6PK:HMUN:2EOI:RUWO
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: silex
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
philippe@pv-desktop:~$ ls -alh /var/lib/docker/aufs/diff/
total 276K
drwxr-xr-x 40 root root 116K Apr 13 15:32 .
drwxr-xr-x 5 root root 4.0K Sep 18 2015 ..
drwxr-xr-x 4 root root 4.0K Jun 17 2016 005d00efb0ba949d627ad439aec8c268b5d55759f6e92e51d7828c12e3817147
drwxr-xr-x 8 root root 4.0K May 2 2016 0968e52874bbfaa938ffc869cef1c5b78e2d4f7a670e19ef47f713868b9bfbdf
drwxr-xr-x 4 root root 4.0K Jun 20 2016 188233e6dcc37e2308e69807ffd19aca3e61be367daae921f2bcb15a1d6237d0
drwxr-xr-x 6 root root 4.0K Jun 20 2016 188233e6dcc37e2308e69807ffd19aca3e61be367daae921f2bcb15a1d6237d0-init
drwxr-xr-x 21 root root 4.0K Apr 8 2016 250ecb97108a6d8a8c41f9d2eb61389a228c95f980575e95ee61f9e8629d5180
drwxr-xr-x 2 root root 4.0K Dec 22 2015 291f16f99d9b0bc05100e463dbc007ef816e0cf17b85d20cf51da5eb2b866810
drwxr-xr-x 2 root root 4.0K May 2 2016 3054baaa0b4a7b52da2d25170e9ce4865967f899bdf6d444b571e57be141b712
drwxr-xr-x 2 root root 4.0K Feb 5 2016 369aca82a5c05d17006b9dca3bf92d1de7d39d7cd908ed665ef181649525464e
drwxr-xr-x 3 root root 4.0K Jun 17 2016 3835a1d1dfe755d9d1ada6933a0ea7a4943caf8f3d96eb3d79c8de7ce25954d2
(...strip)
philippe@pv-desktop:~$ du -hs /var/lib/docker/aufs/diff/
1.7G /var/lib/docker/aufs/diff/
philippe@pv-desktop:~$ docker system prune -a
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all images without at least one container associated to them
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0 B
```
Can I safely `rm -r /var/lib/docker/aufs` and restart the docker daemon?
Running `spotify/docker-gc` does not clean those orphans.
**EDIT**: thanks @CVTJNII!
Stopping the Docker daemon and erasing all of /var/lib/docker
will be safer. Erasing /var/lib/docker/aufs will cause you to lose
your images anyway so it's better to start with a clean /var/lib/docker
in my opinion. This is the "solution" I've been using for several
months for this problem now.
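A sketch of that procedure (assumes a systemd-managed daemon; note that this destroys ALL local images, containers, and volumes):

```shell
#!/bin/sh
# WARNING: deletes all local Docker state.
systemctl stop docker        # stop the daemon first
rm -rf /var/lib/docker       # wipe the whole state dir, not just aufs/
systemctl start docker       # the daemon recreates a clean /var/lib/docker
```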
Starting with 17.06 there
should no longer be any *new* orphaned diffs.
Instead you may start seeing containers with the state `Dead`, this
happens if there was an error during removal that is non-recoverable and
may require an admin to deal with it.
In addition, removal is a bit more robust, and less prone to error due
to race conditions or failed unmounts.
@cpuguy83: great news, can you explain what the admin would need to do
if that happens?
@Silex It depends on the cause.
Typically what has happened is there is a `device or resource busy` error due to some mount being leaked into a container. If you are running something like cAdvisor, this is pretty much a guarantee, as the instructions say to mount the whole docker dir into the cadvisor container.
This *can* be tricky, you may have to stop the offending container(s)
and then remove the `dead` container.
If you are on a newer kernel (3.15+) it is unlikely that you would see
the issue anymore, though there still may be some edge case.
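One way to hunt for the offending process (a hedged sketch; the `aufs/mnt` pattern assumes the aufs driver's default mount path) is to look for processes whose mount namespace still references docker's mounts:

```shell
#!/bin/sh
# Find processes that still hold a reference to docker's aufs
# mounts; these are candidates for "device or resource busy"
# errors when removing a container.
grep -l 'aufs/mnt' /proc/[0-9]*/mountinfo 2>/dev/null |
  sed 's|/proc/\([0-9]*\)/mountinfo|\1|' |
  while read -r pid; do ps -o pid=,comm= -p "$pid"; done
```

On a healthy host this prints nothing; any output points at a process to stop before retrying the removal.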
Docker version 17.06.0-ce, build 02c1d87
I tried removing all images, volumes, networks and containers, but that didn't help. I also tried these commands:
```
docker system prune -af
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc:ro spotify/docker-gc
```
These files still remain:
```
root@Dark:/var/lib/docker/aufs# ls -la *
diff:
total 92
drwx------ 12 root root 45056 Jul 28 17:28 .
drwx------ 5 root root 4096 Jul 9 00:18 ..
drwxr-xr-x 4 root root 4096 Jul 10 01:35 78f8ecad2e94fedfb1ced425885fd80bb8721f9fd70715de2ce373986785b882
drwxr-xr-x 6 root root 4096 Jul 10 01:35 78f8ecad2e94fedfb1ced425885fd80bb8721f9fd70715de2ce373986785b882-init
drwxr-xr-x 5 root root 4096 Jul 10 01:35 7caa9688638ea9669bac451b155b65b121e99fcea8d675688f0c76678ba85ccd
drwxr-xr-x 6 root root 4096 Jul 10 01:35 7caa9688638ea9669bac451b155b65b121e99fcea8d675688f0c76678ba85ccd-init
drwxr-xr-x 4 root root 4096 Jul 12 14:45 b7b7770aae461af083e72e5e3232a62a90f934c83e38830d06365108e302e7ac
drwxr-xr-x 6 root root 4096 Jul 12 14:45 b7b7770aae461af083e72e5e3232a62a90f934c83e38830d06365108e302e7ac-init
drwxr-xr-x 4 root root 4096 Jul 10 01:35 d5752b27b341e17e730d3f4acbec04b10e41dc01ce6f9f98ff38208c0647f2e4
drwxr-xr-x 6 root root 4096 Jul 10 01:35 d5752b27b341e17e730d3f4acbec04b10e41dc01ce6f9f98ff38208c0647f2e4-init
drwxr-xr-x 6 root root 4096 Jul 10 01:35 e412d3c6f0f5f85e23d7a396d47c459f5d74378b474b27106ab9b82ea829dbfb
drwxr-xr-x 6 root root 4096 Jul 10 01:35 e412d3c6f0f5f85e23d7a396d47c459f5d74378b474b27106ab9b82ea829dbfb-init
layers:
total 52
drwx------ 2 root root 45056 Jul 28 17:28 .
drwx------ 5 root root 4096 Jul 9 00:18 ..
-rw-r--r-- 1 root root 0 Jul 10 01:35 78f8ecad2e94fedfb1ced425885fd80bb8721f9fd70715de2ce373986785b882
-rw-r--r-- 1 root root 0 Jul 10 01:35 78f8ecad2e94fedfb1ced425885fd80bb8721f9fd70715de2ce373986785b882-init
-rw-r--r-- 1 root root 0 Jul 10 01:35 7caa9688638ea9669bac451b155b65b121e99fcea8d675688f0c76678ba85ccd
-rw-r--r-- 1 root root 0 Jul 10 01:35 7caa9688638ea9669bac451b155b65b121e99fcea8d675688f0c76678ba85ccd-init
-rw-r--r-- 1 root root 0 Jul 12 14:45 b7b7770aae461af083e72e5e3232a62a90f934c83e38830d06365108e302e7ac
-rw-r--r-- 1 root root 0 Jul 12 14:45 b7b7770aae461af083e72e5e3232a62a90f934c83e38830d06365108e302e7ac-init
-rw-r--r-- 1 root root 0 Jul 10 01:35 d5752b27b341e17e730d3f4acbec04b10e41dc01ce6f9f98ff38208c0647f2e4
-rw-r--r-- 1 root root 0 Jul 10 01:35 d5752b27b341e17e730d3f4acbec04b10e41dc01ce6f9f98ff38208c0647f2e4-init
-rw-r--r-- 1 root root 0 Jul 10 01:35 e412d3c6f0f5f85e23d7a396d47c459f5d74378b474b27106ab9b82ea829dbfb
-rw-r--r-- 1 root root 0 Jul 10 01:35 e412d3c6f0f5f85e23d7a396d47c459f5d74378b474b27106ab9b82ea829dbfb-init
mnt:
total 92
drwx------ 12 root root 45056 Jul 28 17:28 .
drwx------ 5 root root 4096 Jul 9 00:18 ..
drwxr-xr-x 2 root root 4096 Jul 10 01:35 78f8ecad2e94fedfb1ced425885fd80bb8721f9fd70715de2ce373986785b882
drwxr-xr-x 2 root root 4096 Jul 10 01:35 78f8ecad2e94fedfb1ced425885fd80bb8721f9fd70715de2ce373986785b882-init
drwxr-xr-x 2 root root 4096 Jul 10 01:35 7caa9688638ea9669bac451b155b65b121e99fcea8d675688f0c76678ba85ccd
drwxr-xr-x 2 root root 4096 Jul 10 01:35 7caa9688638ea9669bac451b155b65b121e99fcea8d675688f0c76678ba85ccd-init
drwxr-xr-x 2 root root 4096 Jul 12 14:45 b7b7770aae461af083e72e5e3232a62a90f934c83e38830d06365108e302e7ac
drwxr-xr-x 2 root root 4096 Jul 12 14:45 b7b7770aae461af083e72e5e3232a62a90f934c83e38830d06365108e302e7ac-init
drwxr-xr-x 2 root root 4096 Jul 10 01:35 d5752b27b341e17e730d3f4acbec04b10e41dc01ce6f9f98ff38208c0647f2e4
drwxr-xr-x 2 root root 4096 Jul 10 01:35 d5752b27b341e17e730d3f4acbec04b10e41dc01ce6f9f98ff38208c0647f2e4-init
drwxr-xr-x 2 root root 4096 Jul 10 01:35 e412d3c6f0f5f85e23d7a396d47c459f5d74378b474b27106ab9b82ea829dbfb
drwxr-xr-x 2 root root 4096 Jul 10 01:35 e412d3c6f0f5f85e23d7a396d47c459f5d74378b474b27106ab9b82ea829dbfb-init
```
```
# docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 0 0 0B 0B
Containers          0                   0                   0B                  0B
Local Volumes       0                   0                   0B                  0B
```
How can these be deleted?
@haos616 try stopping all running containers first, and then run `docker
system prune -af`.
This did the trick for me.
Didn't work while I had a container running.
If it's an upgrade from a previous version of docker, it's possible
those diffs were generated / left behind by that version. Docker 17.06 won't remove a container if layers failed to be removed (when using --force); older versions did, which could lead to orphaned layers
@julian-pani I did so in the beginning but it does not help.
```
# docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images              0                   0                   0B                  0B
Containers          0                   0                   0B                  0B
Local Volumes       0                   0                   0B                  0B
```
@thaJeztah No. I cleaned the Docker one or two months ago. Then the
version was already 17.06. I used command `docker system prune -af`. It
removed everything.
Running https://github.com/spotify/docker-gc as a container worked for me, but it went a step further and deleted some of my required images too :(
So I've put a small wrapper script like the one below in place to be safe:
```
#!/bin/sh
# Save all current images to the exclude file so docker-gc keeps them
docker images -q > /etc/docker-gc-exclude
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /etc:/etc:ro spotify/docker-gc
```
thanks again to spotify
IIUC, the spotify script just calls `docker rm` and `docker rmi` - did
it actually remove orphaned diffs?
Just some feedback for the community: I've read through all of this, and none of the solutions actually seem to work consistently or reliably.
My "fix" was simply to double the amount of disk space on my AWS instances. I know all too well that's a crappy fix, but it is the best workaround I've found for Docker's bloated aufs storage. This really, really needs to be fixed.
@fuzzygroup 17.06 should no longer create orphaned diffs, but it won't clean up the old ones yet.
I was able to clean up with this script.
I don't see why it wouldn't work, but who knows.
Anyway, it's working fine for me. It will delete all images, containers, and volumes... As it should not run very often, I find it a minor side effect. But it's up to you whether to use it or not.
https://gist.github.com/Karreg/84206b9711cbc6d0fbbe77a57f705979
https://stackoverflow.com/q/45798076/562769 seems to be related. I've
posted a quick fix.
FYI, still seeing this with `17.06.1-ce`
```
Containers: 20
Running: 0
Paused: 0
Stopped: 20
Images: 124
Server Version: 17.06.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 185
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
apparmor
Kernel Version: 4.4.0-83-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 7.796GiB
Name: gitlab-cirunner
ID: PWLR:R6HF:MK3Y:KN5A:AWRV:KHFY:F36D:WASF:7K7B:U7FY:2DJA:DBE2
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
```
`/var/lib/docker/aufs/diff` contains lots of directories with the `-init-removing` and `-removing` prefix:
```
ffd5477de24b0d9993724e40175185038a62250861516030a33280898243e742-init-removing
ffd5477de24b0d9993724e40175185038a62250861516030a33280898243e742-removing
ffd900de0634992e99c022a16775805dfd0ffd1f6c89fece7deb6b1a71c5e38c-init-removing
ffd900de0634992e99c022a16775805dfd0ffd1f6c89fece7deb6b1a71c5e38c-removing
```
| FYI, still seeing
this with 17.06.1-ce
Still seeing what, exactly?
There should not be any way for a diff dir to leak; however, diff dirs that already existed at upgrade time will still exist.
Still seeing orphaned diffs as far as I can tell. `docker system prune` didn't remove them, neither did `docker-gc`. Manually running `rm -rf /var/lib/docker/aufs/diff/*-removing` seems to be working.
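A slightly safer version of that manual cleanup (a sketch; assumes the aufs layout and that the daemon can be stopped) is to dry-run first:

```shell
#!/bin/sh
# Dry run: show what would be removed before deleting anything.
ls -d /var/lib/docker/aufs/diff/*-removing 2>/dev/null

# Then, with the daemon stopped, actually remove the leftovers:
# systemctl stop docker
# rm -rf /var/lib/docker/aufs/diff/*-removing
# systemctl start docker
```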
Yes, docker will not clean up old orphaned dirs yet.
By old you mean those created from a previous version of docker with
this issue?
This is a fresh install of Docker we did about two weeks ago, those
orphans must have been created since then, so it seems that docker must
still be creating those orphans?
I mean, in the last half an hour I've got `112` new diffs with
`-removing`, since I rm'ed them manually.
```
$ ls /var/lib/docker/aufs/diff/ | grep removing | wc -l
112
```
You said "17.06 should no longer create orphaned diffs, but it won't
clean up the old ones yet.", but surely this cannot be correct, or am I
missing something? Are those tagged with `-removing` not orphaned?
@orf On a newer kernel, it's highly unexpected to have any issue at all
during removal. Are you mounting `/var/lib/docker` into a container?
I'll check in the aufs driver to see if there's a specific issue there
with it reporting a successful remove when it really wasn't.
We are not mounting `/var/lib/docker` into a container.
```
$ uname -a
Linux gitlab-cirunner 4.4.0-83-generic #106~14.04.1-Ubuntu SMP Mon Jun 26 18:10:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
```
We are running 14.04 LTS
Let me know if there is anything I can provide to help debug this.
For other reasons (swarm mode networking) I moved off 14.04 for Docker
machines.
This appears to be worse with 17.06.01-ce. I updated a build machine to
this version and immediately started seeing the `*-init-removing` and
the `*-removing` directories left around as part of the build process. I
stopped the service, removed the `/var/lib/docker` directory, restarted
the service and after a few builds was close to out of disk space
again. I stopped the service again, ran `apt-get purge docker-ce`,
removed `/var/lib/docker` again and installed the 17.06.0-ce version.
Not getting the extra directories in `/var/lib/docker/aufs/diff` and
disk space is representative of images that are on the build machine.
I've reproduced the behavior on my development machine as well - just
building an image seems to create these extra directories for each layer
of the image so I would run out of disk space really quick. Again,
reverting to 17.06.0-ce seems to not have the problem so I'm going to
stay there for now.
@mmanderson Thanks for reporting. Taking a look at changes in the AUFS
driver.
@mmanderson Do you have any containers in the `Dead` state in `docker ps
-a`?
All of my docker build servers are running out of space.

I have upgraded within the last week or so to Docker version 17.06.1-ce,
build 874a737. I believe that nothing else has changed and that this
issue either emerged or manifested as part of the upgrade process. The
aufs diff directory is massive and I already pruned all images and
dangling volumes.
[issue-22207.txt](https://github.com/moby/moby/files/1240116/issue-22207.txt)
@cpuguy83 No containers in any state. Here is what I just did to demonstrate this with 17.06.01-ce:
1. Started with a fresh install of docker 17.06.01-ce on Ubuntu 16.04.03
LTS (i.e. docker not installed and no /var/lib/docker directory).
After install verified an empty /var/lib/docker/aufs/diff directory.
2. Ran a docker build with a fairly simple dockerfile based on
ubuntu:latest - all it does is pull statsd_exporter from github and
extract it into /usr/bin (see attached file).
3. After running the build, run `docker ps -a` to show no containers in any state. There are several `*-removing` folders in the `/var/lib/docker/aufs/diff` folder.
4. Run `docker system df` to verify images, containers, and volumes. The result is
```
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              2                   0                   132.7MB             132.7MB (100%)
Containers          0                   0                   0B                  0B
Local Volumes       0                   0                   0B                  0B
```
5. Running `du -sch /var/lib/docker/*/` shows 152M for `/var/lib/docker/aufs/`.
6. Run `docker rmi $(docker images -q)` to remove the built image layers. Running `docker system df` after this shows all zeros, but running `du -sch /var/lib/docker/*/` still shows 152M for `/var/lib/docker/aufs/`, and there are now `*-removing` folders for all of the folders that didn't have them before, along with the existing `*-removing` folders that are still there.
@erikh is this the issue you are experiencing?
@cpuguy83 After uninstalling 17.06.01-ce, removing the /var/lib/docker directory, and installing 17.06.0-ce, I tried to run the same build. The build fails because of the `ADD` from remote URLs bug that was fixed in 17.06.1. However, I don't get any `*-removing` directories for the steps that do complete, and after cleaning up everything with `docker system prune` and `docker rmi $(docker images -q)` the `/var/lib/docker/aufs/diff` directory is again empty and the space is freed.
Thanks all, this is a regression in 17.06.1...
PR to fix is here: https://github.com/moby/moby/pull/34587
awesome, thanks for the quick patch @cpuguy83!
/cc @erikh
@rogaha! yes, thanks to you and @cpuguy83!
Thank you so much @Karreg for your [excellent script](https://github.com/moby/moby/issues/22207#issuecomment-322707352). After getting rid of all the old orphaned diffs and freeing huge amounts of lost disk space, we are now using it regularly to clean our VMs before installing new docker images. A great help and an almost perfect workaround for this issue. @TP75
Looks like Docker, Inc. have some contracts with computer data storage manufacturers.
@Karreg's script worked fine for me and I freed all the space in the diffs directory.
Having the same issue.
Docker Host Details
```
root@UbuntuCont:~# docker info
Containers: 3
Running: 0
Paused: 0
Stopped: 3
Images: 4
Server Version: 17.06.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 14
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-93-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 3.358GiB
Name: UbuntuCont
ID: QQA5:DC5S:C2FL:LCC6:XY6E:V3FR:TRW3:VMOQ:QQKD:AP2M:H3JA:I6VX
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
```
```
root@UbuntuCont:/var/lib/docker/aufs/diff# ls
031c85352fe85f07fede77dee0ac9dc2c7723177a819e72c534e1399208c95fa
09d53040e7e6798b5987ea76fe4f84f0906785b94a392a72e8e41a66cd9f242d
09d53040e7e6798b5987ea76fe4f84f0906785b94a392a72e8e41a66cd9f242d-init
0fb1ffc90969e9706801e2a18870f3ecd857a58f1094fbb968b3fa873e4cf2e4
10549179bd21a9c7af018d4ef305bb9196413b9662fce333b607104c40f38781
10d86a48e03cabf9af2c765dc84824809f24674ac339e4b9ffe572f50bd26b9c-init-removing
10d86a48e03cabf9af2c765dc84824809f24674ac339e4b9ffe572f50bd26b9c-removing
2e226946e8e6c2b3613de2afcff4cbb9890b6d9bd365fdda121a51ae96ec5606
2e226946e8e6c2b3613de2afcff4cbb9890b6d9bd365fdda121a51ae96ec5606-init
3601f6953132f557df8b52e03016db406168d3d6511d7ff5c08a90925ea288da-init-removing
3601f6953132f557df8b52e03016db406168d3d6511d7ff5c08a90925ea288da-removing
4b29141243aea4e70472f25a34a91267ab19c15071862c53e903b99740603d4c-init-removing
4b29141243aea4e70472f25a34a91267ab19c15071862c53e903b99740603d4c-removing
520e3fcf82e0fbbb48236dd99b6dee4c5bb9073d768511040c414f205c787dc5-init-removing
520e3fcf82e0fbbb48236dd99b6dee4c5bb9073d768511040c414f205c787dc5-removing
59cbb25a4858e7d3eb9146d64ff7602c9abc68509b8f2ccfe3be76681481904f
5d1c661b452efce22fe4e109fad7a672e755c64f538375fda21c23d49e2590f6
605893aba54feee92830d56b6ef1105a4d2166e71bd3b73a584b2afc83319591
63bd53412210f492d72999f9263a290dfee18310aa0494cb92e0d926d423e281-init-removing
63bd53412210f492d72999f9263a290dfee18310aa0494cb92e0d926d423e281-removing
72146e759ab65c835e214e99a2037f4b475902fdbe550c46ea0d396fb5ab2779-init-removing
72146e759ab65c835e214e99a2037f4b475902fdbe550c46ea0d396fb5ab2779-removing
8147e0b06dcbce4aa7eb86ed74f4ee8301e5fe2ee73c3a80dcb230bd0ddfcc26-init-removing
8147e0b06dcbce4aa7eb86ed74f4ee8301e5fe2ee73c3a80dcb230bd0ddfcc26-removing
a72735551217bb1ad01b77dbdbb9b8effa9f41315b0c481f8d74b5606c50deb4
aa58f2000b9f7d1ed2a6b476740c292c3c716e1d4dc04b7718580a490bba5ee8
b552cb853e33a8c758cb664aec70e2c4e85eacff180f56cbfab988a8e10c0174-removing
cd80c351b81ed13c4b64d9dfdc20c84f6b01cbb3e26f560faf2b63dae12dec55-init-removing
cd80c351b81ed13c4b64d9dfdc20c84f6b01cbb3e26f560faf2b63dae12dec55-removing
fe903be376821b7afee38a016f9765136ecb096c59178156299acb9f629061a2
fe903be376821b7afee38a016f9765136ecb096c59178156299acb9f629061a2-init
```
@kasunsjc please read the posts just above yours.
I confirm that upgrading to 17.06.2-ce solved this issue. I didn't have to manually remove the directories (like last time) after the upgrade, either.
17.06.2-ce _appears_ to have fixed this for me as well. No more
`-removing` directories in there, got a decent amount of space back.
I'm assuming that the `-init` directories I have in `aufs/diff` are unrelated (some of them are pretty old). They are all small, though, so it hardly matters.
Updating to 17.07.0 solved the issue here
too, not even `docker system prune --all -f` would remove the
directories before but after upgrading they got autoremoved on reboot.
Confirming this issue was resolved on Ubuntu 16.04 with 17.06.2-ce. As
soon as it was updated, the space cleared.
#### Challenge Name
https://www.freecodecamp.com/challenges/sift-through-text-with-regular-expressions
#### Issue Description
Unable to edit the code at the line that says `// Change this Line`.
Clicking with the mouse or using the arrow keys on the keyboard doesn't work, unless you edit some other line
and then return to the original line.
The issue is seen in Firefox and Chrome on Mac, but not in Safari.
#### Browser Information
- Browser Name, Version: Google Chrome, 49.0.2623.110 (64-bit), Mozilla Firefox 44.0.2.
- Operating System: Mac OS X 10.11.3
- Mobile, Desktop, or Tablet: Desktop
#### Your Code
```
No user code required.
```
#### Screenshot
I've uploaded a video of the issue to understand it more, as an image won't suffice here.
https://youtu.be/2ScYLKg3xpU
Closing as duplicate of #7847. Happy coding!
net: Support the /etc/resolver DNS resolution configuration hierarchy on OS X
https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man5/resolver.5.html
OS X allows you to add TLD specific resolver configurations. Quite popular ones are /etc/resolver/vm for local virtual machines and /etc/resolver/dev for local development purposes.
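For illustration, a minimal, hypothetical `/etc/resolver/dev` file might look like this (keys per `resolver(5)`; the nameserver address and port here are made up, pointing at a local development DNS server):

```
# /etc/resolver/dev — hypothetical example
nameserver 127.0.0.1
port 5353
```

With this in place, macOS system resolution routes lookups for `*.dev` to that server while all other names follow `/etc/resolv.conf`.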
https://golang.org/src/net/dnsclient_unix.go#L231
Go seems to be hardcoded to only take /etc/resolv.conf into account on Unix platforms.
I don't think Go-native DNS resolving mechanism is used on Mac.
https://golang.org/src/net/dnsclient_unix.go#L231 is not executed if I run
``` go
addrs, err := net.LookupHost("google.com")
```
on my Mac.
If I enable debugging (`GODEBUG=netdns=2 go run test.go`), the following is printed:
```
go package net: using cgo DNS resolver
go package net: hostLookupOrder(google.com) = cgo
```
which means that OS-native DNS resolving is used.
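To make that selection explicit in code rather than via `GODEBUG`, a program can (since Go 1.8) request the pure-Go resolver directly. This is a minimal sketch of the path that, per this issue, ignores `/etc/resolver`; resolving `localhost` is served from `/etc/hosts`, so no network is needed:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// lookupPureGo resolves host with the pure-Go resolver, bypassing the
// cgo/OS-native path regardless of how the binary was built.
func lookupPureGo(host string) ([]string, error) {
	r := &net.Resolver{PreferGo: true}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	return r.LookupHost(ctx, host)
}

func main() {
	addrs, err := lookupPureGo("localhost")
	fmt.Println(addrs, err)
}
```

On a Mac this makes the difference observable: `lookupPureGo` will not consult `/etc/resolver/*`, while the default (cgo) path does.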
Can you supply an exact configuration file, Go code, actual and expected output?
@nodirt This is for a binary with cgo off.
If cgo is disabled then the pure Go DNS resolver will be used. If you want
to use the Mac DNS resolver, please build with cgo.
On Mon, 7 Sep 2015 07:47 Jonathan Rudenberg notifications@github.com
wrote:
> @nodirt https://github.com/nodirt This is for a binary with cgo off.
Shouldn't be a problem since this is needed only on a dev machine.
On Sun, Sep 6, 2015 at 4:06 PM Dave Cheney notifications@github.com wrote:
> If cgo is disabled then the pure Go DNS resolver will be used. If you want
> to use the Mac DNS resolver, please build with cgo.
In this specific case, @Rotonen was using the Flynn binary that we distribute as a compiled
artifact; it is compiled without cgo to ease cross-compilation. Just
because the user is a developer doesn't mean that they are a Go
developer or want to compile the binary themselves. The only question here is whether this feature is out of scope for the pure-Go resolver.
Cross compilation with a cgo-enabled net package is not that hard.
You can reuse the package contained in the binary distribution and
force internal linking.
I don't see anything wrong with supporting the OS X /etc/resolver
directory. That said, my understanding is that the Go DNS resolver does not work well on most OS X machines. That is why it is disabled by default.
This would be great on all platforms anyway. Is there any disadvantage to supporting this behaviour? It would neatly remove the need to install and configure dnsmasq just to provide the simple function of having different resolvers for different TLDs.
I know this issue is quite old, but has there been any traction on it?
Any resolution?
Any updates would be posted here. No updates have
been posted here.
See `resolver(5)`. Just reading the files out of /etc/resolver/* will
miss out on other mechanisms for configuring the same thing, for example
configuration profiles or IKE attributes.
Just stumbled upon this today while attempting to use coredns as a DNS proxy for local development. It's a real bummer to discover how naive our support for OS X is.
We've generally assumed people use cgo on Darwin, so
this bug has never been a priority.
I do admit that practically means that Darwin binaries need to be built
on Darwin, which is difficult for people wanting to cross-compile for a dozen platforms as part of their release process.
Perhaps on Darwin without cgo we could just shell out to a
program to do DNS resolution (e.g. host, dig, nslookup?). At least
`nslookup` has an interactive mode that would permit re-using a child process for multiple lookups, if that proves necessary for performance.
I think the reality is that
most command-line utilities will be compiled for two platforms, Linux and
OS X, and the OS X build will always have cgo disabled. Some subset of
the OS X users are using a VPN, expect .local names to resolve, or have
some other situation where hostname resolution is more than "just query this one DNS server always".
Some subset of those users will actually open an issue with the tool,
and of those even a smaller subset identify go as the problem and raise
an issue here.
So I think you
underestimate the impact of the problem.
Shelling out to `nslookup` will not fix it. The problem is "doing a DNS
query" is not the same thing as "resolving a hostname". Resolving a
hostname involves more, such as:
- /etc/hosts
- [RFC6762](https://tools.ietf.org/html/rfc6762) `.local` names
- Other hostname resolution protocols, such as [NIS](https://www.thkukuk.de/nis/) or [LDAP](https://serverfault.com/questions/166981/how-to-configure-ldap-to-resolve-host-names), if configured
- Honoring the domain search path, if configured
- If DNS is to be used, determining which server to use.
Tools like `host`, `nslookup`, and `dig` do DNS queries by design; they do not resolve hostnames. This is equally true on Linux as well as OS X. Unfortunately OS X has somehow acquired some lore about having "two DNS systems", which is simply false. Or at least it was false, until Go command-line utilities gained popularity.
If you do want to shell out to
a command to perform host resolution, the correct command on OS X is
`dscacheutil -q host -a name $hostname`. This is analogous to `getent
hosts $hostname` on Linux.
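To illustrate one of the points above — that resolving a hostname consults sources beyond DNS — here is a minimal Go sketch of just the `/etc/hosts` step (first matching entry wins, `#` starts a comment). This is an illustrative toy, not the real `net` package implementation:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseHosts maps hostnames to addresses from /etc/hosts-style content.
// Each line is "address name [alias...]"; '#' starts a comment, and the
// first entry for a name wins.
func parseHosts(content string) map[string]string {
	out := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(content))
	for sc.Scan() {
		line := sc.Text()
		if i := strings.IndexByte(line, '#'); i >= 0 {
			line = line[:i] // strip trailing comment
		}
		fields := strings.Fields(line)
		if len(fields) < 2 {
			continue // blank or malformed line
		}
		for _, name := range fields[1:] {
			if _, dup := out[name]; !dup {
				out[name] = fields[0]
			}
		}
	}
	return out
}

func main() {
	hosts := parseHosts("127.0.0.1 localhost\n::1 localhost\n10.0.0.5 dev.internal # local VM\n")
	fmt.Println(hosts["dev.internal"]) // 10.0.0.5
}
```

A full resolver would layer this under mDNS, the search path, and the DNS server selection described above.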
Another path is to make the go resolver's behavior more consistent with
the OS X system resolver. This begins with obtaining resolver
configuration from [SystemConfiguration.framework](https://developer.apple.com/documentation/systemconfiguration) or `scutil --dns`, _not_ `/etc/resolv.conf`.
`dscacheutil` sounds good. I was thinking of `lookupd` when I wrote the comment above but my local machine didn't have `lookupd` so I omitted it. Now I see that `dscacheutil` replaced `lookupd`.
I don't think we want to get into the business of reimplementing Darwin's name resolution.
@randall77, since you're having fun with macOS lately,
any thoughts here? Could we have non-cgo binaries still call into the
macOS name resolution code somehow with some assembly/linker goo?
Let's see if we can use the libSystem bindings directly even when cgo is ostensibly disabled.
> expect .local names to resolve
I actually expect `.local` names to resolve on all platforms per mDNS anyway, if the target responds to the broadcast appropriately.
@bitglue is correct. I think a lot of people are going to file issues against a tool and not raise issues to the Go project.
A good example of this is Homebrew. They recently removed support for
options in their installs, which now means people can't install packages
written in Go, like Hashicorp's Vault, with cgo support. We used to be
able to do `brew install vault --with-dynamic` to enable cgo support and
get correct DNS resolution, but now that is removed and we're stuck
with having to hack their install script to get Vault compiled with cgo.
It
would be nice to see Go's native resolver work in a less naive fashion
so we don't need to worry about this issue anymore.
See https://github.com/Homebrew/homebrew-core/issues/33507 for
reference.
I would chime in and venture that the root of this issue might be that
the `net` package treats all Unix systems the same. Perhaps there should
be a stubbed-out variant for macOS to deal with its `configd`-based
resolution?
This issue, as has been noted, will affect every binary not compiled with cgo when users are using VPNs, which would seem to be a common use case.
@rsc Can you provide some detail on how we might be able to call libSystem bindings without cgo?
@grantseltzer The current runtime package is full of examples of calling into libSystem. See runtime/sys_darwin.go.
I'm taking a stab at this; I have a branch on my GitHub fork here: https://github.com/grantseltzer/go but could use some help.
The function call i'm looking for is [res_search](https://developer.apple.com/library/archive/documentation/System/Conceptual/ManPages_iPhoneOS/man3/res_query.3.html) which is in libresolv (`/usr/lib/libresolv.9.dylib`)
I have the cgo_import_dynamic directive:
`//go:cgo_import_dynamic libresolv_res_search res_search "/usr/lib/libresolv.9.dylib"`
The Go function that makes the libcCall call and trampoline (`sys_darwin.go`):
```go
//go:nosplit
//go:cgo_unsafe_args
func Res_search(name *byte, class int32, rtype int32, answer *byte, anslen int32) int32 {
return libcCall(unsafe.Pointer(funcPC(res_search_trampoline)), unsafe.Pointer(&name))
}
func res_search_trampoline()
```
and defined the amd64 assembly routine (`sys_darwin_amd64.s`):
```asm
TEXT runtime·res_search_trampoline(SB),NOSPLIT,$0
PUSHQ BP
MOVQ SP, BP
MOVL 0(DI), SI // arg 1 name
MOVQ 8(DI), DX // arg 2 class
MOVQ 12(DI), CX // arg 3 type
MOVQ 16(DI), R8 // arg 4 answer
MOVQ 24(DI), R9 // arg 5 anslen
CALL libresolv_res_search(SB)
POPQ BP
RET
```
When testing the function (which is
exported just for testing), I get a return code of `-1` and no response
in buffer:
```go
func main() {
name := "google.com"
var nameAddr = name[0]
var buffer = [512]byte{}
x := runtime.Res_search(&nameAddr, 255,
255, &buffer[0], 512)
fmt.Println("res_search return code:", x)
fmt.Printf("Buffer: %s\n", buffer)
}
```
Anything glaring that I'm missing? Perhaps my data types or stack offset
sizes.
Most importantly, can someone link me to documentation on how to debug
the code at this level?
EDIT:
more testing/version information:
uname -a
```
Darwin Grant-SelzterRichman 17.7.0 Darwin Kernel Version 17.7.0: Thu Dec
20 21:47:19 PST 2018; root:xnu-4570.71.22~1/RELEASE_X86_64 x86_64
```
```
go version go1.11.5 darwin/amd64
```
CC @randall77
I believe I was misusing MOVQ vs MOVL (now potentially fixed to this):
```asm
TEXT runtime·res_search_trampoline(SB),NOSPLIT,$0
PUSHQ BP
MOVQ SP, BP
MOVQ 0(DI), DI // arg 1 name
MOVL 8(DI), SI // arg 2 class
MOVL 12(DI), DX // arg 3 type
MOVQ 16(DI), CX // arg 4 answer
MOVL 24(DI), R8 // arg 5 anslen
CALL libresolv_res_search(SB)
POPQ BP
RET
```
Still not there yet though.
I'm stepping through with delve and my hunch is that RDI has not been
properly initialized when entering the res_search_trampoline in
`sys_darwin_amd64.s`
When moving from offsets off DI to the respective arg registers the
program appears to be blowing away the destination registers instead
(pictured below):

Another thing that's confusing me is that when I step into `Res_search`
(the go function that makes the call to `libcCall`) my arguments are
unreadable:

Anyone have a hunch of why this call isn't working or have advice on
debugging?
Update:
I am getting DNS records using the libresolv `res_search` binding with
cgo disabled :D!
Working to confirm that
this actually honors the `/etc/resolver` files, not sure if it is at
the moment.

Would still love to hear an explanation for this, but the way I got it
working was by changing the order of the arguments being loaded to the
order of them listed in the dlv screenshot above:
```asm
TEXT runtime·res_search_trampoline(SB),NOSPLIT,$0
PUSHQ BP
MOVQ SP, BP
MOVL (DI), R8 // arg 5 anslen
MOVQ 16(DI), CX // arg 4 answer
MOVL 8(DI), SI // arg 2 class
MOVQ 0(DI), DI // arg 1 name
MOVL 12(DI), DX // arg 3 type
CALL libresolv_res_search(SB)
POPQ BP
RET
```
Current update: Calling this routine does in fact honor `/etc/resolver/`
files. ~~I'm currently trying to figure out an issue where the
specified query 'type' is not being honored and only AAAA queries are
sent.~~
My questions for once I fix that and prepare it for a CL:
1) Should this routine be defined for all of i386, x86_64, ARM, and ARM64?
2) What testing mechanisms exist
for code at this level beyond manual testing?
3) Should the cgo bindings exist in runtime or are they appropriate for
the net package?
Opened #30686
Change https://golang.org/cl/166297 mentions this issue: `net: Use
libSystem bindings for DNS resolution on macos if CGO is unavailable`
If we want to accommodate several DNS stub resolver implementations,
typically it
would be as follows:
- well-cooked external getaddrinfo based one; currently enabled by
netdns=cgo,
- half-baked external resolver library, res_xxx, based one,
- from scratch; currently enabled by netdns=go.
However, I'm still not sure we really need to hold all of these
implementations in the net package. Is there any specific reason for not making a new API that accepts external stub resolver implementations? Once we open up the API, we would also be able to use it for upcoming technologies such as DoH (DNS over HTTPS).
```
TEXT runtime·res_search_trampoline(SB),NOSPLIT,$0
PUSHQ BP
MOVQ SP, BP
MOVL (DI), R8 // arg 5 anslen
MOVQ 16(DI), CX // arg 4 answer
MOVL 8(DI), SI // arg 2 class
MOVQ 0(DI), DI // arg 1 name
MOVL 12(DI), DX // arg 3 type
CALL libresolv_res_search(SB)
POPQ BP
RET
```
The last `MOVL` is using a `DI` value that just got clobbered
in the previous instruction. You have to load `DI` last.
The manpage is unclear about what the return value of res_search is. You
might need to call `libc_error` if the return value is < 0 to get an
actual error code. See mmap for an example.
Debugging this stuff is hard generally. Sorry about that. It does seem
that you're making progress though.
By the way, if Darwin supports `res_nsearch` and friends, we should
probably use them, as they are thread-safe. `res_search` and `res_nsearch` normally return the length of the response and I assume the same is
true on Darwin.
@randall77 Ah that makes a lot of sense, thank you! I pushed changes
including the error checks (they return size of response, unless error
which is -1)
@ianlancetaylor I have been working on this today, as well as changing
the GODEBUG/CGO set logic discussed on gerrit.
`res_nsearch` is supported.
In order to use `res_nsearch` we would have to use `res_ninit`. I don't
know whether `res_search` would also work OK, but it's troubling that
it's not considered to be thread-safe on GNU/Linux. I don't know about Darwin. I don't know when the global variable is modified.
But I guess that to use `res_ninit` and `res_nsearch` we would need to at least know the size
of `res_state`. Probably the best approach would be to double-check
that on Darwin `res_state` is <= 512 bytes, as I expect it is, and then
use `[64]uint64`.
Change https://golang.org/cl/180842 mentions this issue: `net: fix non-cgo macOS resolver code`
Given that we already use the C library with cgo-based macOS builds (the default)
and that we in fact prefer cgoLookupHost to doing it ourselves, it
seems like Go should support /etc/resolver just fine out of the box.
CL 166297 (f6b42a5) added some code for the non-cgo builds, but (1) it doesn't work and (2) it's unclear that the non-cgo builds really need attention to this corner
case.
I sent CL 180843 to revert the recent changes, but I am inclined to
leave this bug closed, since again the cgo path should be handling
/etc/resolver just fine.
It seems odd that a deprecated mechanism on macOS would be the best
alternative. Perhaps a native builder for Darwin would fix the upstream
issues?
> it's unclear that the non-cgo builds really need attention to this
> corner case.
This is an actual issue for us, as it presumably is for the original
reporter as well as others who've chimed in on this thread.
> This is an actual issue for us, as it presumably is for the original reporter
I've circumvented this since 2015 by not using [Flynn](https://flynn.io/),
and thus not needing the functionality in a non-cgo Go on Darwin. That
decision had nothing to do with this issue: I circumvented by beefing up
my machine-internal infrastructure stack so I did not have to rely on
the macOS `/etc/resolver/*` mechanism for rolling my own private TLD.
I've not encountered any software, Go or otherwise, since then, which
would not work on Darwin with my machine-local VM cluster and networking
setup using the `/etc/resolver/*` mechanism. I still use the
machine-local infra stack for evaluating new infrastructure stacks from
time to time. The circumstances for ending up in this corner are fairly
specific - when putting on the systems consultant hat from time to time,
I am a lone wolf for whom everything needs to work laptop-internally
for being able to go and showcase things trivially.
@cespare perhaps the
real solution for you would be to bake the contextual resolver
dynamicity into your corporate networking infrastructure and not try to do a full OSI layer cake in-machine. Just roll your own network-internal TLD root.
Or figure out cross compilation - it is less scary than it sounds like.
@rsc While it's true that CGO is enabled by default in the compiler, it is consistently disabled by the maintainers of flagship Golang applications:
https://github.com/kubernetes/kubernetes/blob/v1.14.2/hack/lib/golang.sh#L377-L410
https://github.com/hashicorp/consul/blob/v1.5.1/build-support/functions/20-build.sh#L456
https://github.com/hashicorp/terraform/blob/v0.12.1/scripts/build.sh#L43
https://github.com/hashicorp/vault/blob/v1.1.3/Makefile#L18
This means, for all these tools, when run on an OS X machine, their DNS resolving is broken beyond the happy path of vanilla DNS - which from my personal experience, while admittedly short, has never been the case in an enterprise workplace, primarily due to VPNs.
It also means that whenever someone discovers that DNS is broken in any Golang tool they use, they'll eventually discover this thread, where we effectively told them to go pound sand, and take the issue up with the maintainer(s) of their tool as to why it was too difficult for them to build and release
their tool with the defaults turned on.
As to what the challenges are with leaving CGO enabled, I would
encourage you to take a look at the issues @caarlos0 has had while
trying to support CGO in the fabulous build tool GoReleaser:
https://github.com/goreleaser/goreleaser/issues/708
Additionally, the folks
at Homebrew, a commonly used package manager for OS X, are making it
increasingly difficult for maintainers to even offer installation
flags, compounding the issue:
https://github.com/Homebrew/homebrew-core/issues/33507
Ultimately, it's up to you @rsc and the example you want to lead with.
It's without a doubt beyond obnoxious that so many systems have failed
to get to the point where we have to even consider solving the problem in the language - nevertheless, here we are looking for a hero.
Adding our non-corner-case problem to the list: https://github.com/vapor-ware/ksync/issues/260
@rsc
Between the comments above, ones on the original issue, and through
speaking to people in person and on slack I know a lot of people/orgs
could really use your fixed change set. Kubectl, helm, vpn services, hashicorp tools, and many many others are affected by the lack of this feature.
What would you need to see to overturn your decision?
Build macos `oc` binary with CGO_ENABLED=1 so that it honours /etc/resolver?
##### Version
```
$ oc version
oc v4.0.0-0.171.0
kubernetes v1.12.4+a532756e37
```
##### Steps To Reproduce
1. Create a cluster locally which doesn't have public DNS lookup (you
can use crc for it).
2. Don't append the DNS details to `/etc/resolv.conf`; instead use
`/etc/resolver/<whatever_dns>`. In our case we use
`/etc/resolver/testing`.
3. Try to open the web console in the browser: it works and honours
the resolver file.
4. Try to use the oc CLI to log in to the cluster: it doesn't work until you add
that DNS to `/etc/resolv.conf`.
##### Current Result
Users need to add the DNS details to `/etc/resolv.conf`, whereas most apps on the Mac honour the resolver file and work.
##### Expected Result
The OpenShift client binary should also honour the `resolver` directory files.
helm upgrade fails with spec.clusterIP: Invalid value: "": field is immutable
When issuing `helm upgrade`, it shows errors like the following ("my-service" changed from `clusterIP: None` to `type: LoadBalancer` without the `clusterIP` field):
```
Error: UPGRADE FAILED: Service "my-service" is invalid: spec.clusterIP: Invalid value: "": field is immutable
```
However,
all other pods with the new version are still restarted, except
that the "my-service" type does not change to the new type "LoadBalancer".
I understand why the upgrade failed: helm does not support
changing certain fields. But why does helm still upgrade the other services/pods by restarting them? Should helm do nothing if there is any error during the upgrade? I expected helm to treat the whole set of services as a package and either upgrade all or none, but it seems my expectation might be wrong.
And if we ever end up in such a situation, what should we do to get out of it? For example, how do we upgrade "my-service" to the new type?
Also, if I use the --dry-run option, helm does not show any errors.
Is this considered a bug or expected behaviour, i.e. the upgrade throws an error but some services still get upgraded?
Output of `helm version`:
```
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085",
GitTreeState:"clean"}
```
Output of `kubectl version`:
```
Client Version: version.Info{Major:"1", Minor:"16",
GitVersion:"v1.16.0",
GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77",
GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z",
GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+",
GitVersion:"v1.14.10-gke.27",
GitCommit:"145f9e21a4515947d6fb10819e5a336aff1b6959",
GitTreeState:"clean", BuildDate:"2020-02-21T18:01:40Z",
GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
```
Cloud Provider/Platform (AKS, GKE, Minikube etc.):
GKE and Minikube
Not enough information has been provided to reproduce. Please tell us
how to create a reproducible chart, and which Helm commands you used.
Hi, here are the reproduction steps.
Create the two service YAML files below.
nginx.yaml
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
```
prometheus.yaml
```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: prometheus
spec:
template:
metadata:
labels:
app: prometheus
spec:
containers:
- image: prom/prometheus
name: prometheus
ports:
- containerPort: 9090
imagePullPolicy: Always
hostname: prometheus
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: prometheus
spec:
selector:
app: prometheus
clusterIP: None
ports:
- name: headless
port: 9090
targetPort: 0
```
Then put these two files in helm1/templates/ and install. It shows the
prometheus service uses a clusterIP and the nginx version is 1.14.2:
```
# helm upgrade --install test helm1
Release "test" does not exist. Installing it now.
NAME: test
LAST DEPLOYED: Tue Apr 21 20:42:55 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35d
prometheus ClusterIP None <none> 9090/TCP 7s
# kubectl describe deployment nginx |grep Image
Image: nginx:1.14.2
```
Now update the image in nginx.yaml to the new version 1.16:
```
image: nginx:1.16
```
and update prometheus.yaml by changing the service to LoadBalancer:
```
spec:
selector:
app: prometheus
ports:
- name: "9090"
port: 9090
protocol: TCP
targetPort: 9090
type: LoadBalancer
```
Now put them in helm2 and do the upgrade. You can see that the upgrade
throws an error, but the nginx deployment goes through and is upgraded
to the new version, while prometheus is not upgraded; it is still using
a cluster IP.
```
# helm upgrade --install test helm2
Error: UPGRADE FAILED: cannot patch "prometheus" with kind Service:
Service "prometheus" is invalid: spec.clusterIP: Invalid value: "":
field is immutable
# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35d
prometheus ClusterIP None <none> 9090/TCP 5m34s
# kubectl describe deployment nginx |grep Image
Image: nginx:1.16
```
helm list shows
```
# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
test default 2 2020-04-21 20:48:20.133644429 -0700 PDT failed
```
helm history
```
# helm history test
REVISION UPDATED
STATUS CHART APP VERSION DESCRIPTION
1 Tue Apr 21 20:42:55 2020 deployed helm-helm 1.0.0.6 Install
complete
2 Tue Apr 21 20:48:20 2020 failed helm-helm 1.0.0.6
Upgrade "test" failed: cannot patch "prometheus" with kind Service:
Service "prometheus" is invalid: spec.clusterIP: Invalid value: "":
field is immutable
```
We see the same behavior with v3.2.0;
downgrading to v3.1.3 is our temporary fix.
I've got a lot of this with my Helm 2 -> 3 migration. When trying to
upgrade the converted releases for the first time, I get this error for the
Nginx Ingress, Prometheus Operator, Graylog, and Jaeger
charts so far. For most of them I'm content with just deleting the services and letting Helm recreate them, but for Nginx Ingress this isn't an option...
Just found this https://github.com/helm/helm/issues/6378#issuecomment-557746499 which explains the problem in my case.
Closing as a duplicate of #6378. @cablespaghetti found the deeper explanation for this behaviour, which is described in great detail.
Let us know if that does not work for you.
@GaramNick why would downgrading fix this for you? Can you elaborate more on “what” was fixed by downgrading?
@bacongobbler While you're here: is there any way to fix this situation without deleting the release and re-deploying? I can't find
a way to do that under helm 2 or 3. I want to hack the existing release
data so Helm thinks the clusterIP has always been omitted and so no
patch is necessary.
Have you tried `kubectl edit`?
We have the same issue, and downgrading to `3.1.3` fixed it for us as well.
My guess is that it has to do with the new logic in
https://github.com/helm/helm/pull/7649/commits/d829343c1514db17bee7a90624d06cdfbffde963
considering this a `Create` and not an update, thus trying to set an empty
IP instead of reusing the populated one.
Interesting find. thank you for investigating.
@jlegrone any chance you might have time to look into this?
@bacongobbler Our CI/CD pipeline uses Helm to update our application
that includes a Service with type ClusterIP. The command:
```bash
helm upgrade --install --force \
--wait \
--set image.repository="$CI_REGISTRY_IMAGE" \
--set image.tag="$CI_COMMIT_REF_NAME-$CI_COMMIT_SHA" \
--set image.pullPolicy=IfNotPresent \
--namespace="$KUBE_NAMESPACE" \
"$APP_NAME" \
./path/to/charts/
```
On v3.2.0 this command fails with `Service "service-name" is invalid: spec.clusterIP: Invalid value: "": field is immutable`
On v3.1.3 this works fine.
Let me know if you like to have more info.
Same here. We had the following service.yaml working fine with helm 2 for many, many months.
After migration, the helm 3.2 `helm upgrade` command failed with the same error as above. Downgrading to 3.1.3 resolved it.
```
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.global.name }}
namespace: {{ index .Values.global.namespace .Values.global.env }}
labels:
microservice: {{ .Values.global.name }}
spec:
type: ClusterIP
ports:
- port: 8080
selector:
microservice: {{ .Values.global.name }}
```
> We have the same issue and downgrading to
> 3.1.3 fixed it also for us. My guess is that it has to do with the new
> logic in d829343 considering this a Create and not an update thus trying
> to set empty IP and not reusing the populated one

@n1koo Can you explain why you think this is the code causing the issue? This is the install code, not the upgrade code, and the [code in 3.1](https://github.com/helm/helm/blob/release-3.1/pkg/action/install.go#L293) is also a `Create`, and it works.
I
am reviewing the issue with @adamreese, and we _think_ it is the patch
that @n1koo identified. The Create method will bypass the normal 3-way
diff on the Service, which will result in the service's clusterIP being
set to "" instead of the value populated by Kubernetes. As a
result, the manifest sent to the API server _appears_ to be resetting
the cluster IP, which is illegal on a service (and definitely not what the user intended).
We're still looking into this and I will update if we learn more.
So https://github.com/helm/helm/issues/6378#issuecomment-557746499
is correct. Please read that before continuing with this issue. If
`clusterIP: ""` is set, Kubernetes will assign an IP. On the next Helm
upgrade, if `clusterIP: ""` is sent again, it will give the
error above, because it appears _to Kubernetes_ that you are trying to
reset the IP. (Yes, Kubernetes modifies the `spec:` section of a
service!)
When the `Create` method bypasses the 3-way diff, it sets `clusterIP:
""` instead of setting it to the IP address assigned by Kubernetes.
To reproduce:
```
$ helm create issue7956
$ # edit issue7956/templates/service.yaml and add `clusterIP: ""` under `spec:`
$ helm upgrade --install issue7956 issue7956
...
$ helm upgrade issue7956 issue7956
Error: UPGRADE FAILED: cannot patch "issue-issue7956" with kind Service: Service "issue-issue7956" is invalid: spec.clusterIP: Invalid value: "": field is immutable
```
The second time you run the upgrade, it will fail.
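One commonly suggested workaround (a hedged sketch, not an official fix; the `app.fullname` helper name is assumed from a standard chart scaffold) is to reuse the already-assigned IP via Helm 3's `lookup` template function, so the rendered manifest never sends an empty `clusterIP`:

```
# Hypothetical service template fragment: preserve the clusterIP that
# Kubernetes assigned on install, instead of re-sending "".
spec:
  {{- $existing := lookup "v1" "Service" .Release.Namespace (include "app.fullname" .) }}
  {{- if $existing }}
  clusterIP: {{ $existing.spec.clusterIP }}
  {{- end }}
```

Note that `lookup` returns an empty result during `helm template` and `--dry-run`, which is why the `if` guard is needed there.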
I cannot reproduce @IdanAdar's case on `master`.
@GaramNick there is not enough info about the service you are using for
us to reproduce your error.
My situation:
`version.BuildInfo{Version:"v3.2.0",
GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71",
GitTreeState:"clean", GoVersion:"go1.13.10"}`
also tested w/
`version.BuildInfo{Version:"v3.2.1",
GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a",
GitTreeState:"clean", GoVersion:"go1.13.10"}`
given the following service template:
```
apiVersion: v1
kind: Service
metadata:
name: {{ include "app.fullname" . }}
labels:
{{- include "app.labels" . | nindent 4 }}
annotations:
getambassador.io/config: |
---
apiVersion: ambassador/v1
kind: Mapping
name: {{ include "app.fullname" . }}_mapping
prefix: /{{ include "app.fullname" . }}
host: "^{{ include "app.fullname" . }}.*"
host_regex: true
service: {{ include "app.fullname" . }}.{{ .Release.Namespace }}
rewrite: ""
timeout_ms: 60000
bypass_auth: true
cors:
origins: "*"
methods: POST, GET, OPTIONS
headers:
- Content-Type
- Authorization
- x-client-id
- x-client-secret
- x-client-trace-id
- x-flow-proto
---
apiVersion: ambassador/v1
kind: Mapping
name: {{ include "app.fullname" . }}_swagger_mapping
ambassador_id: corp
prefix: /swagger
host: "^{{ include "app.fullname" . }}.corp.*"
host_regex: true
service: {{ include "app.fullname" . }}.{{ .Release.Namespace }}
rewrite: ""
bypass_auth: true
cors:
origins: "*"
methods: POST, GET, OPTIONS
headers:
- Content-Type
- x-client-id
- x-client-secret
- Authorization
- x-flow-proto
namespace: {{ .Release.Namespace }}
spec:
type: {{ .Values.service.type }}
selector:
{{- include "app.selectorLabels" . | nindent 4 }}
ports:
- port: {{ .Values.service.port }}
name: http-rest-hub
targetPort: http-rest
- port: {{ .Values.service.healthPort }}
name: http-health
targetPort: http-health
```
which results in the following after `upgrade --install`:
```
apiVersion: v1
kind: Service
metadata:
annotations:
getambassador.io/config: |
---
apiVersion: ambassador/v1
kind: Mapping
name: hub-alt-bor_mapping
prefix: /hub-alt-bor
host: "^hub-alt-bor.*"
host_regex: true
service: hub-alt-bor.brett
rewrite: ""
timeout_ms: 60000
bypass_auth: true
cors:
origins: "*"
methods: POST, GET, OPTIONS
headers:
- Content-Type
- Authorization
- x-client-id
- x-client-secret
- x-client-trace-id
- x-flow-proto
---
apiVersion: ambassador/v1
kind: Mapping
name: hub-alt-bor_swagger_mapping
ambassador_id: corp
prefix: /swagger
host: "^hub-alt-bor.corp.*"
host_regex: true
service: hub-alt-bor.brett
rewrite: ""
bypass_auth: true
cors:
origins: "*"
methods: POST, GET, OPTIONS
headers:
- Content-Type
- x-client-id
- x-client-secret
- Authorization
- x-flow-proto
meta.helm.sh/release-name: alt-bor
meta.helm.sh/release-namespace: brett
creationTimestamp: ...
labels:
app: hub
app.kubernetes.io/instance: alt-bor
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: hub
app.kubernetes.io/version: v1.6.0-rc.26
deploy.xevo.com/stackname: bor-v0.1-test
helm.sh/chart: hub-0.0.4
owner: gateway
ownerSlack: TODOunknown
name: hub-alt-bor
namespace: brett
resourceVersion: ...
selfLink: ...
uid: ...
spec:
clusterIP: 172.20.147.13
ports:
- name: http-rest-hub
port: 80
protocol: TCP
targetPort: http-rest
- name: http-health
port: 90
protocol: TCP
targetPort:
http-health
selector:
app.kubernetes.io/instance: alt-bor
app.kubernetes.io/name: hub
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
```
If I then upload this exact same chart as version 0.0.5 and `upgrade
--install` again I get the following:
`Error: UPGRADE FAILED: failed to replace object: Service "hub-alt-bor"
is invalid: spec.clusterIP: Invalid value: "": field is immutable`
The only difference is the value of the `helm.sh/chart` label which now
has a value of `hub-0.0.5`
This is a huge blocker.
> @GaramNick there is not enough info about the service you are using for us to reproduce your error.
@technosophos What do you need? Happy to provide more details!
Update! The update fails ONLY when using `helm upgrade --install` w/
`--force`. Less of a blocker now.
Oh! That is interesting. That should make the error easier to track
down.
Hello @technosophos @bacongobbler we have the same 2 issues:
`version.BuildInfo{Version:"v3.2.1",
GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a",
GitTreeState:"clean", GoVersion:"go1.13.10"}`
1. Issue
We have `Service` template without `clusterIP` but kubernetes will
assign `clusterIP` automatically:
```
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
labels:
app: {{ .Values.image.name }}
release: {{ .Release.Name }}
spec:
type: ClusterIP
ports:
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.port }}
protocol: TCP
name: http
selector:
app: {{ .Values.image.name }}
release: {{ .Release.Name }}
```
After migrating to Helm 3 with `helm 2to3 convert`, trying to upgrade the same release with `helm3 upgrade --install --force` gives:
```
failed to replace object: Service "dummy-stage" is invalid:
spec.clusterIP: Invalid value: "": field is immutable
```
If I do the same without `--force`, `helm3 upgrade --install` works fine without error.
2. Issue
If I want to change `spec.selector.matchLabels` in a Deployment (an immutable field) without `--force`, I get this error:
```
cannot patch "dummy-stage" with kind Deployment: Deployment.apps
"dummy-stage" is invalid: spec.selector: Invalid value:
v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"web-nerf-dummy-app"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
```
If I do the same with `--force`, I get this error:
```
failed to replace object: Deployment.apps "dummy-stage" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"web-nerf-dummy-app"},
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is
immutable
```
Is it possible to implement the same behaviour for `--force` as in Helm 2, where we could upgrade an immutable field without any error?
```
apiVersion: v1
kind: Service
metadata:
name: zipkin-proxy
namespace: monitoring
spec:
ports:
- port: 9411
targetPort: 9411
selector:
app: zipkin-proxy
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: zipkin-proxy
namespace: monitoring
spec:
replicas: {{ .Values.zipkinProxy.replicaCount }}
template:
metadata:
labels:
app: zipkin-proxy
annotations:
prometheus.io/scrape: 'true'
spec:
containers:
- image: {{ .Values.image.repository }}/zipkin-proxy
name: zipkin-proxy
env:
- name: STORAGE_TYPE
value: stackdriver
```
`helm upgrade -i --debug --force --namespace monitoring zipkin-proxy --values ./values.yaml.tmp .`
I have tried removing the `--force` option, and I tried v3.1.3, v3.2.0 as well as v3.2.1; still the same issue.
Stack trace
```
history.go:52: [debug] getting history for release zipkin-proxy
upgrade.go:84: [debug] preparing upgrade for zipkin-proxy
upgrade.go:92: [debug] performing update for zipkin-proxy
upgrade.go:234: [debug] creating upgraded release for zipkin-proxy
client.go:163: [debug] checking 2 resources for changes
client.go:195: [debug] error updating the resource "zipkin-proxy":
cannot
patch "zipkin-proxy" with kind Service: Service "zipkin-proxy" is
invalid: spec.clusterIP: Invalid value: "": field is immutable
client.go:403: [debug] Looks like there are no changes for Deployment
"zipkin-proxy"
upgrade.go:293: [debug] warning: Upgrade "zipkin-proxy" failed: cannot patch "zipkin-proxy" with kind Service: Service
"zipkin-proxy" is invalid: spec.clusterIP: Invalid value: "": field is
immutable
Error: UPGRADE FAILED: cannot patch "zipkin-proxy" with kind Service:
Service "zipkin-proxy" is invalid: spec.clusterIP: Invalid value: "":
field is immutable
helm.go:75: [debug] cannot patch "zipkin-proxy" with kind Service:
Service "zipkin-proxy" is invalid: spec.clusterIP: Invalid value: "": field is immutable
helm.sh/helm/v3/pkg/kube.(*Client).Update
/home/circleci/helm.sh/helm/pkg/kube/client.go:208
helm.sh/helm/v3/pkg/action.(*Upgrade).performUpgrade
/home/circleci/helm.sh/helm/pkg/action/upgrade.go:248
helm.sh/helm/v3/pkg/action.(*Upgrade).Run
/home/circleci/helm.sh/helm/pkg/action/upgrade.go:93
main.newUpgradeCmd.func1
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:137
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:74
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
UPGRADE FAILED
main.newUpgradeCmd.func1
/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:139
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:826
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:74
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
```
I am having this issue when the Helm Chart version changes and having an existing deployment.
Using Helm v3.2.0
I can confirm that downgrading to 3.1.2 works.
spec.clusterIP: Invalid value: "": field is immutable
Output of `helm version`:
```
version.BuildInfo{Version:"v3.1.3", GitCommit:"0a9a9a88e8afd6e77337a3e2ad744756e191429a", GitTreeState:"clean", GoVersion:"go1.13.10"}
```
Output of `kubectl version`:
```
Client Version: version.Info{Major:"1", Minor:"13",
GitVersion:"v1.13.0",
GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-gke.36", GitCommit:"34a615f32e9a0c9e97cdb9f749adb392758349a6", GitTreeState:"clean", BuildDate:"2020-04-06T16:33:17Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
```
Cloud Provider/Platform (AKS, GKE, Minikube etc.):
GKE
If I add a random ClusterIP from my pod CIDR range it works, but I have 50 chart modules and can't do that for all of them.
Hi @azarudeena. Have you looked at the explanation provided in #6378? This looks identical to the symptoms provided in that ticket. If so this should probably be closed as a duplicate issue.
AKS API server support of standard load balancer (not user deployed services)
**What happened**:
When using the [Kubernetes Python API](https://github.com/kubernetes-client/python),
after approximately 5 minutes of inactivity, the connection stops
working. The packets are seemingly dropped on the other end.
In the case of the Python API, this causes the request to block for 15 minutes until
the Linux Kernel forces the socket closed with "Errno 110 Connection
timed out". The other side is closing the connection but not sending a
TCP `RST`, leaving the socket open on the client.
Under the hood the Python API uses a connection pool that reuses the TCP
Connections using the standard HTTP Keep-Alive mechanism.
This causes issues with the out of the box configuration in applications
such as [Airflow](https://github.com/apache/airflow) that use the
Kubernetes Python API.
I would assume other Kubernetes client libraries would have a similar
issue, but haven't investigated them myself.
**What you expected to happen**:
The connection is closed properly by the server instead of packets being dropped.
That way clients can fail fast and reestablish the connection.
If this is unavoidable, the exact values of when the control plane
starts dropping packets should be documented clearly somewhere so the
Kubernetes Client Libraries and application developers can adjust their
defaults or provide AKS-specific configuration.
**How to reproduce it (as minimally and precisely as possible)**:
Run this inside of the cluster, I'm using a `python:3.7.3-stretch` container.
```python
import time
import logging
from kubernetes import config, client
logging.basicConfig(
format='%(asctime)s %(levelname)-8s %(message)s',
level = logging.DEBUG)
config.load_incluster_config()
v1 = client.CoreV1Api()
logging.info('Calling 1st time')
v1.list_namespaced_pod('default')
logging.info('Sleeping 5 minutes')
time.sleep(300)
logging.info('Calling 2nd time')
# this call will timeout after 15 minutes
v1.list_namespaced_pod('default')
logging.info('OK')
```
**Environment**:
- Kubernetes version: v1.12.6
- Size of cluster: 1
This becomes especially noticeable when using watches, as the connection
seems to remain open but stops receiving events. That connection is
expected to be idle for many minutes between events. I've noticed it
with the Python client both inside and outside the cluster, connecting to the public master FQDN.
The basic NLB in front of the API has a 4-minute idle timeout.
This is going to be changed in the near future.
You might want to use an init container with your deployment.
Something like this:
```yaml
initContainers:
  - name: "sysctl"
    image: "busybox:latest"
    resources:
      limits:
        cpu: "10m"
        memory: "8Mi"
      requests:
        cpu: "10m"
        memory: "8Mi"
    securityContext:
      privileged: true
    command:
      - "/bin/sh"
      - "-c"
      - |
        set -o errexit
        set -o xtrace
        sysctl -w net.ipv4.tcp_keepalive_time=180
        sysctl -w net.ipv4.tcp_keepalive_intvl=180
        sysctl -w net.ipv4.tcp_keepalive_probes=4
```
Hope it helps
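An application-side alternative to the node-level sysctl trick above is to enable keep-alives per socket; here is a minimal sketch (the 180s/4-probe values mirror the init-container suggestion, and the Linux-only options are guarded for portability — this is an illustration, not an official workaround):

```python
import socket

def make_keepalive_socket(idle=180, interval=180, probes=4):
    """Create a TCP socket with aggressive keep-alives so idle
    connections through the load balancer's 4-minute idle timeout
    stay open. Values mirror the sysctl workaround above."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific knobs; not present on every platform.
    if hasattr(socket, "TCP_KEEPIDLE"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
    return s
```

This only helps for sockets your own code opens; client libraries that manage their own connection pools still need the kernel-level defaults or library-specific configuration.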
This is directly related to the use of Azure Basic Load Balancers for
AKS clusters. https://github.com/Azure/AKS/issues/643 and is on the
public roadmap: https://github.com/Azure/AKS/projects/1
@jnoller Thanks for your reply. Glad to see the team is aware and has
plans to remedy it.
I'll follow along at #643
Re-opening as this is separate from customers being able to deploy
standard load balancers for their own services.
When this becomes GA, would it require a re-deployment of the AKS cluster? I would expect this to be an implicit change.
In case re-deployment is required, would enabling this need new commands in az cli/Terraform?
This issue has been automatically
marked as stale because it has not had activity in 90 days. It will be
closed if no further activity occurs. Thank you!
Hi,
is this still being worked on? It's still something we're having to
hack around.
Issue still active.
I can confirm this is an issue on AKS, I ran @itszakko job on AWS EKS
and there is no timeout. This affects Airflow on K8S and Kubeflow deployments.
Azure Logs: https://github.com/maganaluis/k8s-api-job/blob/master/azure-aks.log
AWS EKS: https://github.com/maganaluis/k8s-api-job/blob/master/aws-eks.log
@itszakko is your workaround simply querying it every 3rd minute or so?
@jluk @jnoller is this fixed with the GA?
This makes it hard to run services such as Airflow, Kubeflow and Prefect, where there are often long-lived processes.
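Until the load balancer behaviour changes, long-lived clients can treat silent drops as expected and reconnect; a generic sketch (the `fetch` callable is hypothetical, standing in for e.g. a Kubernetes watch call with its own short timeout so a dropped connection fails fast instead of blocking for 15 minutes):

```python
import time

def call_with_reconnect(fetch, retries=3, backoff=1.0):
    """Retry a network call that may hang or fail after idle periods.
    `fetch` should enforce its own (short) timeout so silently dropped
    connections surface as OSError quickly."""
    last_err = None
    for attempt in range(retries):
        try:
            return fetch()
        except OSError as err:  # timeouts / connection resets
            last_err = err
            time.sleep(backoff * (attempt + 1))  # linear backoff
    raise last_err
```

This is just a pattern sketch; real deployments would combine it with the keep-alive settings discussed earlier in the thread.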
I believe this is the same issue highlighted here: https://github.com/Azure/AKS/issues/1877
I believe you are right @shooker
Do you know if a fix is under way @shooker?
Private Cluster API server silently closes connection
<!--
This issue tracker is a best-effort forum for users and customers to suggest features and report bugs.
If you are experiencing a service disruption when creating, upgrading, scaling, or deleting your cluster,
please open a support request with Azure support. Be sure to include your subscription id, resource group,
cluster name, and region. This information should __NOT__ be included in this issue due to its potential
sensitivity.
-->
<!--
Security issues should be reported to secure@microsoft.com and not via this public issue tracker.
-->
**What happened**: Attempts to use a custom operator in a private cluster to watch the cluster with long timeouts seem to lose connectivity, with no connection reset being issued.
**What you expected to happen**:
Connection reset be issued.
**How to reproduce it (as minimally and precisely as possible)**:
Configure a private cluster, use a custom operator to watch the cluster via the API
server, and notice the connection is silently dropped after 4 minutes.
A simple test is to run a pod in a private AKS cluster, exec into its shell, and run curl:
```
curl --no-keepalive --http1.1 -o /dev/null --cacert /run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)" \
  "https://${KUBERNETES_SERVICE_HOST}/api/v1/namespaces/default/pods?watch=1&timeoutSeconds=300"
```
On a public cluster this connection is dropped/reset at 4 minutes, but on a private cluster it runs perpetually.
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (use `kubectl version`): All
- Size of cluster (how many worker nodes are in the cluster?) N/A
- General description of workloads in the cluster (e.g. HTTP microservices, Java app, Ruby on Rails, machine learning, etc.) Custom operators
- Others: N/A
Hi erlucier, AKS bot here :wave:
Thank you for posting on the AKS Repo, I'll do my best to get a
kind human from the AKS team to assist you.
I might be just a bot, but I'm told my suggestions are normally quite
good, as such:
1) If this case is urgent, please [open a Support
Request](https://azure.microsoft.com/en-us/support/create-ticket/) so
that our 24/7 support team may help you faster.
2) Please abide by the [AKS repo Guidelines](https://github.com/Azure/AKS#bug-reports-) and [Code of Conduct](https://github.com/Azure/AKS#code-of-conduct).
3) If you're having an issue, could it be described on the [AKS Troubleshooting guides](https://docs.microsoft.com/en-us/azure/aks/troubleshooting) or [AKS Diagnostics](https://docs.microsoft.com/en-us/azure/aks/concepts-diagnostics)?
4) Make sure
your subscribed to the [AKS Release
Notes](https://github.com/Azure/AKS/releases) to keep up to date with
all that's new on AKS.
5) Make sure there isn't a duplicate of this issue already reported. If
there is, feel free to close this one and '+1' the existing issue.
6) If you have a question, do take a look at our [AKS FAQ](https://docs.microsoft.com/en-us/azure/aks/faq). We place the most common ones there!
Triage required from @Azure/aks-pm
Action required from @Azure/aks-pm